Usenet History: Authentication and Norms

Usenet is a computer-based international distributed discussion system. It was built on top of the Unix-to-Unix Copy (UUCP) dial-up network. The idea was devised by Tom Truscott and Jim Ellis in 1979, and the system went live in 1980. Users could read and post messages (known as articles or postings, and collectively referred to as news) to one or more newsgroups. In many ways, Usenet resembles a bulletin board system (BBS), and it was the forerunner of today's Internet forums. Discussions were threaded, much like modern web forums and bulletin boards, except that posts were stored chronologically on the server.

Authentication & Security:

Usenet was now up and running. Well, what next? Truscott and his group knew it would need some sort of administrative system, and they soon realized that authentication was a weak spot as well. They needed a way to authenticate sites, users, and even individual posts. They understood that something like public-key cryptography could do the job, but they had no idea how a site's public key could itself be authenticated.

They could have used a certificate issued by a certificate authority, but they did not know about that approach at the time. Even if they had, no such authorities existed yet, and they could hardly create one themselves while also building Usenet. Their knowledge of the subject was thin, and they had questions they could not quite answer: what exactly counted as secure, and what would be the right key length?

This shortcoming led them to consider alternatives: what if sites could authenticate each other in a peer-based scheme? They entertained this idea of neighborhood authentication for a while, but it ultimately failed because of loopholes in its logic. To spoof an authentication, a user merely had to fake the path line and claim to be a couple of neighboring sites away. That was an obvious security concern.

Security quickly became a roadblock that kept them stagnant for a while. Their approach to authentication had several loopholes, and anyone with even slight knowledge of Usenet could slip through them.

To understand this flaw, note that Usenet messages were sent using a generic remote-execution mechanism: a site asked the next machine in line to run the 'rnews' command on the incoming article. That was exactly the concern. Anyone who knew this detail could use the same mechanism to pose as a neighbor and have a forged article accepted as authentic.
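
To make the loophole concrete, here is a minimal sketch (not from the original sources) of a neighborhood-style check that trusts the article's own path line. The host names are illustrative; a forger passes the check simply by writing a trusted neighbor's name into the path.

    #include <stdio.h>
    #include <string.h>

    /* Accept an article only if the first hop in its path line is a
       known neighbor.  This is the whole "authentication" check. */
    static int looks_authentic(const char *path)
    {
        const char *neighbors[] = { "duke", "research" };
        char first[64];

        size_t n = strcspn(path, "!");      /* claimed originating hop */
        if (n >= sizeof first)
            return 0;
        memcpy(first, path, n);
        first[n] = '\0';

        for (size_t i = 0; i < sizeof neighbors / sizeof neighbors[0]; i++)
            if (strcmp(first, neighbors[i]) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        /* A genuine article and a forged one look identical to the check,
           because the path line is entirely under the sender's control. */
        printf("%d\n", looks_authentic("duke!unc!tom"));      /* accepted */
        printf("%d\n", looks_authentic("duke!evil!mallory")); /* also accepted */
        return 0;
    }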

At the time, it probably would not have caused a big issue, because the number of users was very low and the network was used by only a handful of people. But building something that merely gave the perception of security, rather than real security, would not have been the ethical thing to do.

Norms:

In principle, detecting misconduct and determining who perpetrated it would be simple. Like Unix itself, the uucp system was not built to prevent resource-exhaustion problems. Which uses of the network actually counted as misuse, and what should be done about them, would only be revealed by experience.

Abuse of the network can be quite dangerous. Like conventional misdeeds, abuses can be discussed, anticipated, and sometimes even coded against, but only experience reveals what actually matters. Uucp provides a certain level of security: it runs as an ordinary user with strict access restrictions, so it is reasonable to argue that it poses no greater hazard than a dial-in line. The main issue was that the designers did not know what to expect or what other difficulties would arise. They were concerned about security to some degree; they had encountered their earliest hackers back in 1971, when suspicious behavior triggered a console alert and prompted them to inspect the punch cards holding the code in question, but that was far from their major concern. They also wrote a quick-and-dirty password guesser and let a few individuals know that their passwords were weak enough to invite a break-in.

Usenet History: Implementation and User Experience

Before diving into the implementation of Usenet, you should know about two things that shaped it. First, the University of North Carolina Computer Science department had a Unix machine with a small, slow disk, a slow CPU, and, most importantly, very little RAM. It was slow even by the standards of 1979 time-sharing machines. Duke CS had a relatively faster computer, an 11/70, but the first implementation had to run on UNC's 11/45. In 1979 there was no Internet as we know it, and neither department was connected to the ARPANET, so logging in remotely was not an option. Dial-up calls were billed per minute, with daytime rates the most expensive. The fastest local connection, through the Gandalf port selector, topped out at 9600 bps.

The second important thing to bear in mind is that the first implementation was meant to be experimental. The first public announcement of Usenet spelled out the problems faced during implementation: the plan had been put together by amateurs, but it was time to get started. Once Usenet was available, a committee could be formed, and that committee could use the net itself to analyze the problems. Nobody involved had designed a network protocol before, so a few experiments were needed to get things right. Keep in mind that Tom Truscott and Jim Ellis had programming experience and were experienced system admins; Tom had communications software experience and had been programming kernel-level software for about 14 years.

Implementation of Usenet

The strategy for developing Usenet was rapid prototyping. The first version of the Netnews software was implemented as a Bourne shell script, about 150 lines long, and it already supported multiple newsgroups and cross-posting.

But why prototype in a shell script? The simple reason: compiling a program took a very long time, and long compile waits tempted Tom to start something else. Most of the code dealt with string handling, which C was not good at. One could have written a string library, but with such slow compilation that too would have been time-consuming. With a shell script you could try out new ideas and develop the code incrementally. A shell script is slow in production and did not run quickly, but that was no great hindrance, since it was not a production program. It was a prototype intended to nail down the file format. Once everything was in place, the software was rewritten in C.

Implementation details

Regrettably, neither the script version nor the C version of the implementation survives today, but Tom remembered a few of the implementation details. A user's subscribed newsgroups were stored in an environment variable set in the .profile file. Which articles a user had already read was tracked through $HOME/.netnews, whose timestamp was updated to the current time; on successful exit, only the last-read time was saved. The script had no way to read articles out of order, to skip an article and come back to it later, or even to stop partway through an article. The limitation came from a faulty assumption: only a couple of articles would be read per day. Incoming traffic today is around 60 tebibytes per day, so the prediction was off by many orders of magnitude.
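
As a rough illustration of that bookkeeping, here is a small C sketch under assumed names: a NETNEWS environment variable holds the subscription list, and the modification time of $HOME/.netnews records the last read. Neither name is documented in the surviving accounts; they stand in for whatever the script actually used.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <time.h>
    #include <utime.h>

    int main(void)
    {
        /* Assumed variable name: a colon-separated subscription list. */
        const char *groups = getenv("NETNEWS");
        if (groups == NULL)
            groups = "NET.general";
        printf("subscribed newsgroups: %s\n", groups);

        const char *home = getenv("HOME");
        if (home == NULL)
            home = ".";
        char marker[1024];
        snprintf(marker, sizeof marker, "%s/.netnews", home);

        /* The marker file's modification time is the "last read" time;
           any article file newer than it counts as unread. */
        struct stat st;
        time_t last_read = 0;
        if (stat(marker, &st) == 0)
            last_read = st.st_mtime;
        printf("last read: %s", ctime(&last_read));

        /* ... present the unread articles here ... */

        /* On a successful exit, bump the marker to "now" (create it
           first if it does not exist yet). */
        FILE *fp = fopen(marker, "a");
        if (fp)
            fclose(fp);
        if (utime(marker, NULL) != 0)
            perror("utime");
        return 0;
    }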

Other implementation details

Another implementation detail was that cross-posted articles were not displayed more than once. A cross-posted article was stored as a single file linked from multiple newsgroup directories. This technique not only made duplicates easy to detect but also saved precious disk space, which was quite expensive at the time.
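
A minimal sketch of that hard-link trick, with made-up newsgroup directories and an article file name:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* One directory per newsgroup (names are illustrative). */
        mkdir("NET.general", 0755);
        mkdir("NET.test", 0755);

        /* Store the article once, under its article-ID file name... */
        FILE *fp = fopen("NET.general/unc.42", "w");
        if (fp) {
            fputs("Body of a cross-posted article\n", fp);
            fclose(fp);
        }

        /* ...then hard-link it into the second newsgroup. */
        if (link("NET.general/unc.42", "NET.test/unc.42") != 0)
            perror("link");

        /* A reader can tell the two paths are one article: same inode. */
        struct stat a, b;
        if (stat("NET.general/unc.42", &a) == 0 &&
            stat("NET.test/unc.42", &b) == 0 &&
            a.st_ino == b.st_ino && a.st_dev == b.st_dev)
            printf("same article, shown only once\n");
        return 0;
    }

Because every link points at the same inode, the article occupies disk space only once, no matter how many newsgroups it appears in.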

A few other points are worth knowing. No global coordinator was needed, because each article ID consisted of the site name, a period, and a sequence number. The filenames were the article IDs, which imposed a limit on their length. A database might have helped for storing articles, but a single database of all news would have required a locking mechanism, which was hard to achieve on 7th Edition Unix: pipes had to be created before the processes that used them, so the file system had to be relied upon instead. The user interface resembled the 7th Edition mail command; it was simple and worked well for exchanging low volumes of mail.
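
Here is a small sketch of that decentralized ID scheme: the site name plus a local sequence number yields a unique article ID with no global coordinator. The site name and the ".seq" counter file are illustrative assumptions, and a real implementation would need to guard the counter against concurrent posters, which is exactly the locking problem mentioned above.

    #include <stdio.h>

    int main(void)
    {
        const char *site = "unc";          /* assumed local site name */
        long seq = 0;

        /* Read and bump a local sequence counter (".seq" is assumed). */
        FILE *fp = fopen(".seq", "r+");
        if (fp == NULL)
            fp = fopen(".seq", "w+");
        if (fp != NULL) {
            fscanf(fp, "%ld", &seq);
            rewind(fp);
            fprintf(fp, "%ld\n", seq + 1);
            fclose(fp);
        }

        /* Site name + "." + sequence number: unique without any
           global coordinator, and usable directly as a file name. */
        char artid[128];
        snprintf(artid, sizeof artid, "%s.%ld", site, seq + 1);
        printf("article ID: %s\n", artid);
        return 0;
    }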

Usenet History: File Format

The designers of the on-the-wire file format knew it would not be perfect on their first attempt. The first decision they made was that the transmitted file would begin with the letter "A" to mark this version.

Why were email-style headers not used in the beginning?

Many people might ask why email-style headers, which were later used for protocols such as HTTP, were not used initially. The key reason was that most of the designers had no exposure to such protocols at the time. The author admits that he himself only learned about the Internet protocols after receiving a copy of a protocol workbook two years later, and it was because of Usenet that he became aware of them at all.

The designers instead chose a minimalist style, influenced by 7th Edition Unix. Had they been aware of the Internet, known in those days as the ARPANET, they would have avoided its conventions deliberately. The first version of their code was a shell script, and in a shell script it is easier to treat complete lines as single entities. Not having to parse headers with continuation lines, optional white space, and arbitrary case was definitely simpler.

Issue of duplicate articles

There was also the issue of how to handle duplicate articles. The designers felt an article ID was an absolute necessity to allow duplicate detection, and they decided the article ID would be the remainder of the first line after the letter A.

The designers also wished to minimize transfer costs. Articles were transmitted over costly dial-up connections, so sending a file that was not needed meant spending real money. Each article therefore carried a list of the systems that had already seen it.

This information was a string of hostnames separated by exclamation points; the last element was the login name of the user who posted the article.

A pertinent question is why that particular format was chosen rather than, say, blanks or commas as separators. The answer is that it was the format UUCP already used for email addresses.
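
Here is a short sketch of how a relaying site could use that path line to avoid sending an article back to a system that has already seen it (host names are illustrative):

    #include <stdio.h>
    #include <string.h>

    /* Return 1 if `host` already appears as an element of the
       !-separated path, so the article should not be sent to it. */
    static int seen_by(const char *path, const char *host)
    {
        char copy[256];
        strncpy(copy, path, sizeof copy - 1);
        copy[sizeof copy - 1] = '\0';

        for (char *p = strtok(copy, "!"); p != NULL; p = strtok(NULL, "!"))
            if (strcmp(p, host) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        /* Hostnames separated by '!', ending with the poster's login. */
        const char *path = "duke!unc!tom";

        printf("send to unc?   %s\n", seen_by(path, "unc")   ? "no" : "yes");
        printf("send to cbosg? %s\n", seen_by(path, "cbosg") ? "no" : "yes");
        return 0;
    }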

Today the scenario is entirely different, since there is full connectivity over the Internet, and things are no longer done the same way. Instead, one party transmits a list of article IDs, and the other asks for the ones it has not yet seen.

Interestingly, the designers had contemplated something of that sort but decided to reject it. After all, articles were relayed over infrequent dial-up connections, and the number of loops, and hence of duplicate articles received, did not seem likely to be high.

In the original plan, Duke would poll the other sites once per night. If Duke only sent a list of article IDs during that call, the sites could not request the articles themselves until the following night, and so would not receive them until the night after that.

Such a delay was not acceptable. The designers instead accepted the possibility of transmitting unnecessary text: there would occasionally be redundant transmissions, but the volume was felt to be acceptable. This was an era before MP3s and JPEGs, so articles were text only, and therefore relatively tiny and cheap to send.

It was obvious that the article's title and date would also have to be sent; the library routines asctime() and ctime() were used to generate the date line. The designers had decided from the start that articles needed to belong to categories, i.e. newsgroups. However, in the original design there was just a single relayed newsgroup, called NET, with no distinction between different kinds of non-local articles.
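
Putting those pieces together, here is a sketch that emits the elements described above: the leading "A" plus article ID, a newsgroup, the !-separated path ending in the poster's login, a ctime() date line, and a title. The ordering of the lines after the first is an assumption for illustration, not a claim about the exact original format.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);

        printf("A%s\n", "unc.42");        /* version letter + article ID */
        printf("%s\n", "NET.general");    /* the single relayed newsgroup */
        printf("%s\n", "duke!unc!tom");   /* path: hosts, then login name */
        printf("%s", ctime(&now));        /* date line (ctime appends '\n') */
        printf("%s\n", "A sample title"); /* article title */
        printf("\nBody of the article goes here.\n");
        return 0;
    }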

Why was cross-posting of articles supported?

Finally, one more thing is worth noting. The designers knew from the beginning that some articles would belong in more than one category, so cross-posting to multiple newsgroups was supported from the start. Although some people later considered cross-posting impolite, the feature was included deliberately.

Usenet History: How the Hardware Problem Was Solved

The initial plan

When Usenet was first conceived at Duke, the planners had three things in mind.

  • They wanted a way to send local administrative messages.
  • They wanted the system to be networked (the idea, incidentally, came from the university's grad students).
  • UUCP would be used for communication between sites. It was the only option available for sites running Unix, and it needed just a single dial-up modem port.

Running UUCP

Running UUCP called for a single dial-up modem port. The issue was dialing: someone had to make the call and pay the charges. Auto-dial modems did not yet exist (the Hayes Smartmodem came much later), and the leased Bell autodialer was too expensive. Usenet was an informal project; even buying a modem was an issue, and paying monthly lease charges would not be workable. The planned solution, and the one Duke could afford, was an acoustic coupler acting as the interface device.

The solution developed by the grad students worked like this:

  • The phone handset was placed into tight-fitting cups, and the electronic side of the coupler was connected to the computer.
  • The coupler took bits from the computer and sent the corresponding sounds through a speaker into the microphone of the handset.
  • Similarly, the microphone in the coupler listened to the sounds corresponding to incoming bits and sent voltage signals to the computer.
  • Since the connection to the telephone network was purely acoustic, there was nothing for the telephone company to object to. AT&T did object later, but eventually fell in line.

The Dialing problem

This solution worked well when dialing was done manually: pick up the handset, place the call, and set the handset into the coupler. One issue remained: how could the computer do the dialing? The coupler connected to the computer over the RS-232 standard, and only five of the modem pins mattered: ground, transmit, receive, CD (carrier detect), and DTR (data terminal ready).

When the computer opened the serial port, it asserted DTR. When the modem established a connection, it asserted carrier detect; when the far end dropped the connection, the modem dropped CD, and that indication was passed back to the calling program. The solution was to use the DTR signal, and it met Duke CS's needs.

Duke implemented this solution successfully. Prof. Steven Bellovin of Columbia University liked the idea and created his own variant, which worked as follows:

  • A normally open relay was placed in series with the phone line to simulate on-hook (the state when a landline handset is not in use).
  • The DTR signal was used when the computer needed the modem.
  • The DTR line was wired so that asserting it closed the relay and took the phone line off-hook. The moment the computer opened the device, the phone went off-hook; when the computer closed the device, the phone went back on-hook. It was a clever way to manage the problem.

Prof. Bellovin then wrote a driver that controlled the DTR line and made the modem and the dialer appear to UUCP as two separate devices.
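
For illustration, here is a minimal sketch of controlling DTR from user space on a modern Unix-like system, using the standard TIOCMBIS/TIOCMBIC ioctls. The original dialer was kernel-level software on 7th Edition Unix, so this is only an analogy, and the device path /dev/ttyS0 is an assumption.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>

    int main(void)
    {
        /* Assumed device path; any serial port device would do. */
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        int dtr = TIOCM_DTR;

        /* Assert DTR: with the relay wiring described above, this takes
           the phone line off-hook, as if the handset had been lifted. */
        ioctl(fd, TIOCMBIS, &dtr);
        sleep(5);                       /* hold the line for a moment */

        /* Drop DTR: the relay opens and the line goes back on-hook. */
        ioctl(fd, TIOCMBIC, &dtr);

        close(fd);
        return 0;
    }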

Now came the last and most serious problem: who would foot the bill? Phone calls were expensive in those days. Calls during normal working hours cost the most, evening calls less, and night calls the least.

The solution worked out was that Duke, having the autodialer, would take responsibility for placing the calls. Any site wanting to join the network had to get a modem with an auto-answer feature and reimburse Duke. To keep expenses to a minimum, calls would be made at night and limited to no more than two per night.

This plan required money to change hands: phone bills would spike, and Duke would have to receive and process payments from other sites. Usenet happened because it got that official sanction; it also materialized because the faculty valued innovation by graduate students.

Usenet History: The Technological Setting

Usenet, also known as Netnews, was created almost exactly forty years ago this very week. To understand where it came from and why certain decisions were made the way they were, it helps to consider the technological constraints of the time.

Early History

Mainframes still ruled the world in 1979, around the time Steven Bellovin, one of Usenet's creators (later of AT&T), was in graduate school. Mainframes were, in fact, the predominant form of computing; the IBM PC was still about two years in the future. The microcomputers of the day were far too small for anything serious. Minicomputers, smaller machines about the size of one or two refrigerators, were used for dedicated applications, most notably in research laboratories for things like process control. Superminicomputers, with good processing ability but low I/O bandwidth, were getting cheaper.

At that time, Unix ran on the Digital Equipment Corporation (DEC) PDP-11, a popular line of minicomputers. The PDP-11 had a 16-bit address space (although with the right OS support you could almost double it by using one 16-bit space for instructions and a separate one for data). Memory ranged from tens of kilobytes (yes, kilobytes) to a very few megabytes, depending on the configuration, and no single program could access more than 64K at a time. Extra physical memory did mean that a context switch could often happen without swapping, since other processes could remain memory-resident.

Early Networking Issues

Networking was out of reach for most people. There was the ARPANET, but to be on it you had to be a defense contractor or an institution with a DARPA research grant. IBM offered its own forms of connectivity based on leased synchronous communications lines. A public packet-switched infrastructure did exist, but only a few sites were connected, through a very limited number of dial-up packet-mode access points.

The one thing that was semi-common was the 300 bps dial-up modem. The just-launched Bell 212A full-duplex dial-up modem was still uncommon. Why? Because modems more or less had to be rented from the telephone company: Ma Bell, more formally known as AT&T. It was legal to purchase your own modem, but hardwiring it to the telephone network had to be done through a rented adapter called a DAA (data access arrangement), ostensibly to "secure the phone network."

The Beginning of Usenet

Usenet, however, was conceptualized in a slightly different regulatory world. Duke University was served by Duke Telecom, a university body (Durham itself was served by GTE), while UNC Chapel Hill was served by the university-owned Chapel Hill telephone utility, which also ran the electric, sewer, and water systems. Around the time Steven Bellovin was a student there, the government ordered those services to be divested.

Steven Bellovin, along with a few others and with the department's support, brought up 6th Edition Unix as a part-time operating system on the PDP-11. Some of the staff were motivated enough to spend the money on a decent 8-port serial adapter and more RAM. That memory may still have been core storage, though semiconductor RAM was just beginning to become affordable at the time. Shortly afterwards the department had a couple of VAX-11/780s, but Usenet was born on the sluggish, tiny 11/45.

The Catalyst Of Networking

The immediate catalyst for Usenet was the wish to upgrade to 7th Edition Unix. On 6th Edition, Duke had used a modification obtained from elsewhere that delivered announcement messages to users as they logged in. But it was not always convenient: even a 5-line message took a while to print at 300 bps, i.e. 30 characters per second. The modification was not remotely compatible with the 7th Edition login command, so a new implementation was required. And UUCP (Unix-to-Unix Copy), a way to interconnect machines, was included in 7th Edition.
