Usenet History: Implementation and User Experience

Before you can understand the implementation of Usenet, you must know two critical things about the environment. First, the University of North Carolina Computer Science department had a Unix machine equipped with a slow, small disk, a slow CPU, and, most importantly, very little RAM; it was slower than most time-sharing machines even by 1979 standards. Duke CS had a relatively faster computer, a PDP-11/70, but the first implementation had to run on UNC's PDP-11/45. In 1979 there was no Internet, and neither department was connected to the ARPANET, so logging in remotely was not an option. Dial-up calls were billed per minute, at higher daytime rates. Local connections through the Gandalf port selector topped out at 9600 bps.

The second important thing to bear in mind is that the first implementation was expected to involve experimentation. The first public announcement of Usenet acknowledged the problems faced during the implementation: the designers knew they were amateurs at protocol design, but it was time to get started. Once Usenet was running, a committee could be formed, and that committee could use the net itself to analyze what the problems were. None of them had designed a network protocol before, so some experiments had to be carried out to get things right. Do keep in mind, though, that Tom Truscott and Jim Ellis were experienced programmers and system administrators; Tom had communications software experience, had written kernel-level code, and had been programming for about 14 years.

Implementation of Usenet

The strategy used for developing Usenet was rapid prototyping. The first version of the netnews software was a Bourne shell script, about 150 lines long, that already supported multiple newsgroups and cross-posting.

But why prototype in a shell script? The simple reason: compiling a program took a very long time on that hardware, and long waits between compiles discouraged experimentation. Most of the code was string handling, and C is a poor language for string handling; a string library could have been written, but slow compilation made that expensive too. With a shell script, you could try out new things and develop the code incrementally. A shell script runs slowly and would be a poor choice for production, but that was not much of a hindrance, because this was not a production program: it was a prototype, intended mainly to settle the file format. Once everything was in place, the software was rewritten in C.

Implementation details

Regrettably, neither the script version nor the C version of the implementation survives today. However, Tom remembered a few of the implementation details. A user's subscribed newsgroups were stored in an environment variable set in the .profile file. Articles the user had already read were tracked through a file, $HOME/.netnews, whose timestamp was updated to the current time; on successful exit, only the time of the last read was saved. The script could not read articles out of order, skip an article to read later, or stop partway through an article. The limitation stemmed from a faulty assumption: that only a couple of articles would arrive per day. Today's incoming traffic is about 60 tebibytes per day, so the prediction was off by many orders of magnitude.
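The original script is lost, but the single last-read timestamp it kept in $HOME/.netnews suggests a simple high-water-mark scheme. The following is a minimal Python sketch of that idea; the function names and directory layout are invented for illustration, not taken from the original code.

```python
import os

def unread_articles(spool_dir, stamp_file):
    """Return paths of articles modified after the last-read time.

    The timestamp on the stamp file is the only state kept: any
    article newer than it counts as unread, which is why reading
    out of order or skipping articles was impossible.
    """
    last_read = os.path.getmtime(stamp_file) if os.path.exists(stamp_file) else 0.0
    return [
        os.path.join(spool_dir, name)
        for name in sorted(os.listdir(spool_dir))
        if os.path.getmtime(os.path.join(spool_dir, name)) > last_read
    ]

def mark_all_read(stamp_file):
    """On successful exit, record "now" as the last-read time."""
    with open(stamp_file, "a"):
        pass
    os.utime(stamp_file, None)
```

A single timestamp costs one file and no database, which fits the 7th Edition constraints the article describes, at the price of all the reading-order limitations noted above.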

Other implementation details

Another detail: cross-posted articles were not displayed more than once. A cross-posted article was stored as a single file linked into multiple newsgroup directories. This technique not only made duplicates easy to detect but also saved disk space, which was quite expensive at the time.
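The one-file, many-links trick can be demonstrated with hard links on any Unix-like system. This is an illustrative sketch, not the original code; the newsgroup and file names are invented.

```python
import os
import tempfile

# One article stored once, then hard-linked into each newsgroup
# directory it is posted to. Names are invented for the example.
spool = tempfile.mkdtemp()
for group in ("NET.general", "NET.chess"):
    os.mkdir(os.path.join(spool, group))

original = os.path.join(spool, "NET.general", "unc.101")
with open(original, "w") as f:
    f.write("Aunc.101\n...article text...\n")

# A cross-post is a hard link, not a copy: same inode, same disk
# blocks, so duplicates are detectable and no space is wasted.
crosspost = os.path.join(spool, "NET.chess", "unc.101")
os.link(original, crosspost)
```

Because both directory entries point at the same inode, a reader can recognize an already-seen article by identity rather than by comparing contents.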

A few other points are worth knowing. No global coordinator was needed, because each article ID combined the originating site's name, a period, and a per-site sequence number. The filenames were the article IDs, which kept them within the filename length limit. A database might have been used to store the articles, but a single database of all news would have required a locking mechanism, which was hard to achieve on 7th Edition Unix: pipes had to be created before the processes that used them, so the file system had to be relied on instead. The user interface resembled the 7th Edition mail command; it was simple, and it worked well for exchanging low volumes of mail.
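The site-name-plus-sequence scheme is worth a tiny sketch, because it shows why no global coordinator was needed: each site only has to keep its own counter. The class and method names here are hypothetical.

```python
import itertools
from collections import defaultdict

class ArticleIds:
    """Decentralized article IDs in the <site>.<sequence> style
    described above: each site numbers its own articles, so IDs are
    globally unique without any central authority, and the ID can
    double as the spool filename.
    """

    def __init__(self):
        self._counters = defaultdict(lambda: itertools.count(1))

    def next_id(self, site):
        return f"{site}.{next(self._counters[site])}"
```

Uniqueness follows from the site name being unique and the sequence being local, the same reasoning the designers relied on.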

Usenet History: File Format

The designers of the over-the-wire file format knew it would not be perfect on their first attempt. The first decision they took was that the first character of a transmitted file would be the letter “A”, marking this version of the format.

Why were email-style headers not used in the beginning?

Many people ask why email-style headers, later adopted for HTTP as well, were not used initially. The key reason is that the designers had no exposure to such protocols at the time. The author admits that he only learned of the Internet protocols after receiving a copy of a protocol workbook two years later, and it was because of Usenet that he became aware of them at all.

The designers instead chose a minimalist style, influenced by 7th Edition Unix. Even had they been aware of what the Internet, then known as the ARPANET, was doing, they might well have avoided its conventions deliberately. The first version of the code was a shell script, and it was far easier to treat complete lines as single entities than to parse headers with continuation lines, optional white space, and arbitrary case.

Issue of duplicate articles

They also had to decide how to handle duplicate articles. The designers felt that an article ID was an absolute necessity to allow duplicate detection. They decided that the article ID would be the remainder of the first line after the letter A.
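Putting the pieces together, an article in the commonly documented A-news layout can be parsed with a few lines of code. This is a sketch based on that documented layout ("A" plus the ID, then newsgroup, path, date, and title lines, followed by the body), not the original C implementation.

```python
def parse_a_news(text):
    """Parse an article in the commonly documented A-news layout.

    Line 1 is "A" followed by the article ID; the next four lines
    are newsgroups, path, date, and title; the rest is the body.
    """
    lines = text.split("\n")
    if not lines or not lines[0].startswith("A"):
        raise ValueError("not an A-format article")
    return {
        "id": lines[0][1:],
        "newsgroups": lines[1],
        "path": lines[2],
        "date": lines[3],
        "title": lines[4],
        "body": "\n".join(lines[5:]),
    }

def is_duplicate(article_id, seen):
    """Duplicate suppression: reject IDs already stored at this site."""
    if article_id in seen:
        return True
    seen.add(article_id)
    return False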

The designers also wished to minimize transfer costs. At the time, articles were transmitted over costly dial-up connections, so sending a file that was not needed cost real money. Each article therefore carried a list of the systems that had already seen it.

This information comprised a string of hostnames separated by exclamation points; the last element was the login name of the user who posted the article.

A pertinent question is why that particular format was chosen rather than something with blanks or commas as separators. The answer is simple: it was the format UUCP already used for email.
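A short sketch shows how little code the "bang path" format needs, which is part of its appeal; the function names here are invented for illustration.

```python
def parse_path(path_header):
    """Split a UUCP-style bang path such as "duke!unc!tom".

    Every element but the last is a host that has already seen the
    article; the last element is the poster's login name.
    """
    *hosts, user = path_header.split("!")
    return hosts, user

def should_send(path_header, neighbor):
    """A relay can skip any neighbor already listed in the path,
    avoiding a paid dial-up call that would transmit a duplicate.
    """
    hosts, _ = parse_path(path_header)
    return neighbor not in hosts
```

One `split` on a single separator does the whole job, which mattered when the first implementation was a shell script.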

Today, the scenario has changed entirely, as there is full connectivity. Things are no longer done the same way. Instead, one party transmits a list of article IDs, and the other asks for the ones it has not yet seen.
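The modern offer-and-request exchange reduces to a set difference that preserves the offer order. This is a schematic sketch of that idea, not any particular news server's code.

```python
def wanted_ids(offered, already_have):
    """Given a list of offered article IDs, return the ones this
    site lacks, in offer order; only these are transferred in full.
    """
    have = set(already_have)
    return [aid for aid in offered if aid not in have]
```

Exchanging short IDs first and full articles second is cheap when links are always up, which is exactly why it was unattractive over once-a-night dial-up calls, as the next paragraph explains.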

It is interesting to note that the designers had contemplated something of that sort but decided to reject it. After all, they were relaying articles over infrequent dial-up connections, and the number of loops, and hence of duplicate articles received, did not appear likely to be high.

In the original plan, Duke would poll several sites once per night. If Duke only sent a list of articles during that call, a site could not request any of them until the next night's call, and so would not receive the articles themselves until a full day later.

However, such a delay was not acceptable. The designers instead accepted the possibility of transmitting unnecessary text. There would be some redundant transmission on occasion, but the volume was felt to be acceptable: this was an era before MP3s and JPEGs, so articles were text only, relatively tiny, and inexpensive to send.

Obviously, each article's title and date also had to be sent. The library routines ctime() and asctime() were used to generate the date line. The designers had decided from the start that articles needed multiple categories, i.e. newsgroups. However, the original design had just a single relayed newsgroup, NET, with no distinction between different types of non-local articles.
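The fixed-width date line those C routines produce is still easy to see today: Python's time.ctime() emits the same 24-character layout as the C library's ctime()/asctime(), for example "Thu Jan  1 00:00:00 1970".

```python
import time

# time.ctime() reproduces the C ctime()/asctime() date line format
# used for early netnews date headers: weekday, month, day, time, year
# in a fixed 24-character field.
date_line = time.ctime(0)  # the Unix epoch, rendered in the local time zone
```

The fixed width made the line trivial to generate and to skip over, in keeping with the whole-lines-as-entities design.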

Why was there cross-posting of articles?

Finally, one more interesting point: the designers knew from the beginning that some articles would belong in multiple categories, so cross-posting to several newsgroups was supported from the start. Although some people consider cross-posting impolite, the feature was intentionally included from the beginning.

Usenet History: How the Hardware Problem Was Solved

The initial plan

When Usenet was initially conceptualized at Duke, the planners had three things in mind.

  • They wanted a way to send local administrative messages.
  • Their goal was to create a networked system. (The idea, incidentally, came from the universities' grad students.)
  • UUCP would be used for communication between sites. It was the only option available on the Unix systems the sites ran, and it needed only a single dial-up modem port.

Running UUCP

Running UUCP called for a single dial-up modem port. The issue was the dialing: someone had to place the call and pay the charges. Auto-dial modems did not exist yet (the Hayes Smartmodem came much later), and the leased Bell autodialer was too expensive. Usenet was an unofficial, unfunded project; buying a modem was itself an issue, and paying monthly lease charges was unworkable. The planned solution, and the one Duke could afford, was an acoustic coupler acting as the interface device.

The solution the grad students developed worked like this:

  • The telephone handset was pressed into tight-fitting cups; the coupler's electronics were connected to the computer.
  • When the computer sent bits, the coupler converted them into sounds played through a small speaker into the handset's microphone.
  • Similarly, a microphone in the coupler listened for the tones corresponding to incoming bits and converted them into voltage signals for the computer.
  • Since only sound was used to couple to the telephone network, the telephone company could raise no objection. (AT&T did object later, but ultimately fell in line.)

The Dialing problem

This solution worked well when dialing was done manually: pick up the handset, place the call, and set the handset into the coupler. The remaining issue was how the computer could do the dialing. The coupler connected to the computer via the RS-232 standard, using five pins: ground, transmit, receive, CD (carrier detect), and DTR (Data Terminal Ready).

When the computer opened the serial port, it asserted the DTR signal toward the modem. When the modem established a connection, it asserted carrier detect; when the connection at the other end dropped, the modem dropped CD and the calling program was notified. The dialing solution was built around the DTR signal, and it met Duke CS's needs.

Duke implemented this solution successfully. Steven Bellovin (then at UNC, now a professor at Columbia University) liked the idea and created his own variant, which worked as follows:

  • A normally open relay was wired in series with the phone line to simulate the phone being on-hook (the state of a landline handset resting in its cradle).
  • The DTR signal indicated that the computer wanted to use the modem.
  • The DTR line was wired so that asserting it closed the relay and put the phone line off-hook. The moment the computer opened the device, the phone went off-hook; when the computer closed the device, the phone went back on-hook. It was a clever way to manage the problem.

Prof. Bellovin then wrote a driver program that controlled the DTR line and made the modem and the dialer appear to UUCP as two separate devices.
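The text does not spell out how toggling a relay produces a dialed number, but relay-based dialers of that era used rotary-style pulse dialing: the line is briefly broken once per unit, with ten pulses for the digit 0. Purely as an illustration of that encoding (and not a reconstruction of Prof. Bellovin's actual driver):

```python
def pulse_counts(number):
    """Rotary-style pulse dialing: each digit maps to that many line
    break pulses, with "0" dialed as ten pulses. Non-digit characters
    such as dashes are ignored. Illustrative only.
    """
    return [10 if d == "0" else int(d) for d in number if d.isdigit()]
```

A driver only needs to open and close the relay with the right timing between pulses and between digits, which is exactly the kind of job a DTR line under software control can do.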

Now came the last and most serious problem: who would foot the bill? Phone calls were expensive in those days. Calls during normal working hours cost the most, evening calls less, and night calls the least.

The solution worked out was that Duke, which had the autodialer, would take responsibility for the calls. Any site wanting to join the network had to get a modem with an auto-answer feature and reimburse Duke. To keep expenses to a minimum, it was decided that the system would place its calls at night, and no more than twice a night.

This plan required money to change hands: phone bills would spike, and Duke had to receive and process payments from other sites. Usenet happened because it had official sanction for this, and because faculty members valued innovation by graduate students.

Usenet History: The Technological Setting


Usenet, also known as Netnews, was founded almost exactly forty years ago this very week. To better understand where it came from and why certain decisions were made the way they were, it is important to take into consideration the technological limitations of the time.

Early Part Of History

Mainframes still roamed the world in 1979, when Steven Bellovin, one of Usenet's creators, was in graduate school. They were, in fact, the predominant form of computation: the IBM PC was still two years in the future, and the microprocessors of the day were far too limited for anything demanding. Minicomputers, smaller machines the size of one or perhaps two refrigerators, were used for specific applications such as process control, particularly in research laboratories. The so-called superminicomputers, with good processing ability but comparatively low I/O bandwidth, were getting cheaper.

Unix at this period ran on the Digital Equipment Corporation (DEC) PDP-11, a common line of minicomputers. The PDP-11 had a 16-bit address space (although with the right OS support you could almost double that by using one 16-bit space for instructions and a separate one for data). Physical memory ranged from tens of kilobytes (yes, kilobytes) to at most a few megabytes, depending on the configuration, and no single program could access more than 64K at a time. Extra physical memory meant that a context switch could happen without swapping, since other processes could remain memory-resident.
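The 64K figure is just the arithmetic of a 16-bit address, worth spelling out once:

```python
# 16-bit addresses give 2**16 distinct byte addresses per program,
# i.e. the 64K limit; separate instruction and data spaces, where the
# OS supported them, nearly doubled the usable total.
address_bits = 16
per_space = 2 ** address_bits      # 65536 bytes = 64 KiB
split_i_d = 2 * per_space          # 131072 bytes = 128 KiB
```

Against those numbers, even a "small" modern process would not fit, which is the constraint every early netnews design choice lived under.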

Early Networking Issues

Networking was out of reach for most people. There was the ARPANET, but to use it you needed to be a defense contractor or an institution with a DARPA research grant. IBM had its own forms of connectivity, based on leased synchronous communication lines. Public packet-switched networks did exist, but few systems were connected to them, often only through a very limited number of dial-up packet access ports.

The one other semi-common option was the 300 bps dial-up modem. The Bell 212A, a 1200 bps full-duplex dial-up modem, had just been launched and was still uncommon. Why? Because you more or less had to rent your modem from the telephone company: Ma Bell, more formally known as AT&T. Purchasing your own modem was legal, but you could not hardwire it into the telephone network; it was only feasible to connect through a rented adapter called a DAA (data access arrangement), which supposedly “secured the phone network.”

The Beginning of Usenet

However, Usenet was conceived in a slightly different regulatory world. Duke University was served by Duke Telecom, a university body (the surrounding city of Durham was served by GTE). UNC Chapel Hill, in turn, was served by a telephone utility that the university itself owned, along with the electric, sewer, and water systems. Steven Bellovin was a student there, and around that time the government ordered the university to divest those services.

With departmental support, Steven Bellovin and a few others brought up 6th Edition Unix as a part-time operating system on UNC's PDP-11/45. Some staff members were motivated enough to spend the money for a decent 8-port serial interface and more RAM. That memory may still have been core storage, though semiconductor RAM was beginning to get affordable around that time. Shortly afterwards the department acquired a couple of VAX-11/780s, but Usenet was born on the sluggish, tiny 11/45.

The Catalyst Of Networking

The immediate catalyst for Usenet was the wish to upgrade to 7th Edition Unix. On 6th Edition, Duke used a patch obtained from another site that displayed announcements to users as they logged in. But that wasn't always convenient: printing even a 5-line message took noticeable time at 300 bps, i.e. 30 characters per second. The patch was not remotely compatible with the 7th Edition login command, so a new implementation was required. And the 7th Edition included UUCP (Unix to Unix Copy), a way to interconnect machines.
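The annoyance of the login announcement is easy to quantify from the figures above; the 60-character line length is an assumption for illustration.

```python
# 300 bps with roughly 10 bits per character (including framing)
# gives the 30 characters per second quoted above, so a 5-line
# message of ~60-character lines ties up the terminal for about
# ten seconds just to print.
chars_per_second = 300 // 10
message_chars = 5 * 60
seconds = message_chars / chars_per_second
```

Ten seconds at every login, for every user, explains why a better announcement mechanism was worth building.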

Usenet History: From Conversation to File Sharing

People are always fond of free stuff, especially when it is available on the Internet. The consequences are not positive, however, when a platform meant for communication becomes congested by it.

That is precisely what happened to Usenet. Its protocols date back around four decades, yet it remains a key hub for sharing files today. In its simplest form, Usenet can be looked upon as an online protocol for conversations; things changed when the public realized that binary files could be passed through it.

Why was there a dramatic shift in Usenet’s digital role over time?

It was in 1979 that a couple of students at Duke University introduced the idea of “netnews.” There were several rounds of improvements before the service came to be known as Usenet. The earliest version of the protocol was referred to as “A-News,” and the chain of improvements built on UUCP, the Unix-to-Unix Copy protocol.

UUCP is a distributed technique for copying files between computing devices, and it grew up alongside the network then known as the ARPANET, which later evolved into what we refer to as the Internet today. The protocol eventually became compatible with that network as well.

Two software developers, Jim Ellis and Tom Truscott, ensured that the software could be made available to all Unix hosts. It took just a few years for the protocol to emerge as one of the preferred ways to communicate across the Internet, which was itself still in its formative years.

The creation of Usenet grew out of the idea that computing devices were turning into sophisticated tools for holding conversations, and a great deal of conversation and communication was already going on. Usenet could be compared to today's Reddit, except that it had no true owner and was decentralized.

However, the roots of this protocol lay in UUCP which, broken down, is a peer-to-peer mechanism for sharing files. That also made it an effective means of distributing them. And while the protocol was designed to carry only text, programmers decided to push the technology further.

Mary Ann Horton, a graduate student at the University of California, Berkeley, was engaged in designing the early protocols around UUCP and helped create a link between the protocol and the broader Internet. Horton was deeply involved with what eventually became Usenet.

Horton used her skills to further shape Usenet, building on the work done by Ellis and Truscott. Around the same time, she took on the responsibility for creating a piece of software called uuencode, which would eventually become central to the legacies of both Usenet and email.


The software had some functional similarities to today's file-attachment formats. It effectively functioned as a bridge between raw text and binary files: running the uuencode command on a binary file produced a block of jumbled text, and a second user could run the corresponding decode command to turn that text back into the binary file.
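The round trip is easy to demonstrate with Python's standard library, which still carries uuencode-style line encoding. This is a sketch using the stdlib, not the original uuencode source; the historical tool chunked files into lines of up to 45 raw bytes each.

```python
import binascii

def uuencode_line(chunk):
    """Encode up to 45 raw bytes as one uuencode-style text line,
    the unit the historical tool used to move binary data over
    text-only channels such as mail and news.
    """
    if len(chunk) > 45:
        raise ValueError("uuencode lines carry at most 45 bytes")
    return binascii.b2a_uu(chunk)

raw = bytes([0, 1, 2, 255, 127, 10])   # arbitrary binary data
line = uuencode_line(raw)              # printable ASCII, safe to transmit
restored = binascii.a2b_uu(line)       # the receiving side reverses it
```

Because every output byte is printable ASCII, the encoded text survives any relay that handles plain text, which is exactly what made it suitable for Usenet distribution.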

The software was extremely useful for email attachments, and it also came in handy for distributing binaries through Usenet. Client software was smart enough to decode the blocks of gibberish text; the encoded form was never meant to be read by humans, only to make the system work.

However, uuencode came with its share of imperfections. Its text encoding was not especially efficient: the waste added overhead, and the encoded text files were larger and more complicated than they needed to be. The idea has been improved upon since then.

Still, it was effective at enabling large-scale file transfer, and it was particularly useful for distributing files via Usenet, because an encoded file could be relayed between waystations as ordinary text.

Usenet's decentralized nature: a limitation?

The decentralized design of Usenet made it tough to filter out unwanted material and spam. The FBI, for instance, could not terminate a Usenet group even when it was unlawfully sharing episodes of a TV sitcom.

However, Internet Service Providers were free to decide not to carry particular newsgroups, and several legitimate newsgroups that infringed no copyright and carried no explicit content were affected as a result. Such changes did not kill Usenet, though they dealt a big wound to its popularity.
