What are Usenet groups? Features and benefits of Usenet groups


Usenet is a network communication system that has been in use since the early 1980s. It comprises newsgroups containing messages and articles posted by users from across the world. The concept is similar to that of a Bulletin Board System (BBS).

Usenet was the precursor to the various internet forums we have today. The concept was conceived in 1979 at Duke University, and it works through a network of news servers.

What are Usenet groups?

Usenet, or the ‘Unix Users Network’, comprises many discussion groups known as Usenet groups or newsgroups. These individual groups or forums are the core of Usenet and serve as discussion forums for members. The name ‘news’ is a misnomer: the groups do not publish news, nor are they related to the media.

What are the Features of Usenet Groups?

The following are some of the features of Usenet groups that explain how these groups work:

· Usenet has both servers connected to the internet and servers not part of the internet.

· It uses NNTP or Network News Transfer Protocol for transmitting data and for sharing files.

· If you have access to Usenet, you can subscribe to various newsgroups or discussion groups. You can then follow the discussions in the group and share your content. The content is not restricted to text and includes other media.

· You can also create your own discussion group and start posting in it.

· Newsgroups can be moderated or unmoderated. A moderated group has an admin who reviews content before it appears; in unmoderated groups, all posts appear directly without any checking.

· There is no central server; all news servers communicate with each other as peers.

· You need a client known as a newsreader to access Usenet. You also need the services of a Usenet service provider, and a search engine to find what you are looking for.

· Many groups allow users to share binary files with other members.
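The NNTP protocol mentioned above is a plain-text dialogue: a client sends commands such as GROUP, and the server answers with a numeric status line. The minimal sketch below shows how such a status line is split into code and text; the reply string is an illustrative example, not output from a real server.

```python
# Minimal sketch of reading an NNTP (RFC 3977) status line.
# The example reply below is illustrative, not from a real server.

def parse_status(line: str) -> tuple[int, str]:
    """Split an NNTP status line into its numeric code and trailing text."""
    code, _, text = line.partition(" ")
    return int(code), text

# A server's reply to "GROUP comp.lang.python" might look like this;
# 211 means the group was selected, followed by counts and article range.
code, text = parse_status("211 1234 3000234 3002322 comp.lang.python")
print(code)  # → 211
```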

What are the Benefits of Usenet Groups?

The popularity of Usenet groups is due to the various benefits they offer:

· Usenet groups are not very expensive. You only need to sign up with a service provider and get a newsreader. A lot of valuable information is available in Usenet groups, so the money you spend is small compared to the value you get.

· There are newsgroups devoted to niche domains where you can find a lot of valuable information. These newsgroups tend to be especially popular in technical areas.

· There is a higher level of security in Usenet groups, as compared to social media platforms that offer a similar interaction.

· Usenet groups are different from social media platforms. On social media, you need to sign up by providing your identity. Usenet, on the other hand, can be used anonymously: no one needs to know who you are. If you want to discuss sensitive matters, or are a whistleblower, Usenet groups can be extremely useful.

· The groups can be very secure thanks to the SSL encryption most providers offer, which protects users' traffic.

· Usenet servers are very fast compared to conventional web servers, so downloading files is a breeze.

· We live in a knowledge era where knowledge is power. Usenet groups allow you to access knowledge from varied sources. This is one of the prime reasons why Usenet groups are so popular.

Cost of Usenet Groups

Google Groups offers a free way into some newsgroups, but you cannot expect the best from it. If you want to tap into all that Usenet offers, you need to sign up with a good Usenet service provider.

If you are looking to sign up for Usenet, you can consider XS Usenet. They are a service provider that has been offering Usenet and VPN services since 2009, and they were the first free Usenet provider in the world. While the free service is available, it has restrictions.

The packages that XS Usenet offers are:

1. Free Usenet, with 2 Mbit/s speed, a 25 GB data limit, and 5 connections, but no SSL and no posting. The free plan is a basic one, meant for people to get a feel for what Usenet is.

2. Premium Usenet, priced at 7.99 Euros per month, with unlimited speed and unlimited data, plus 50 connections, SSL, and posting.

3. A VPN and Usenet plan, which has all the features of Premium Usenet plus VPN security with full encryption, priced at 10.99 Euros per month.

Usenet History: Usenet Growth and B-News

When you think of Usenet, think of a computer-based global network. The name is short for "Users Network". The major uses of Usenet have been news distribution, news discussion, and information exchange.

Tracing the history, we reach Reed College, the University of Oklahoma, and UC Berkeley: educational institutions with their own research at stake and an invitation to the future of networking. Of these, Berkeley was the best connected, being on the ARPANET.

Around 1979, the curious minds of Jim Ellis and Tom Truscott were turning over this idea of "Usenet". They were graduate students, and naturally their heads were full of ideas to change the world and the way it functioned. They had doubts, but they were brave. Another factor was lucky timing: the idea might have vanished into thin air had there not been supportive collaborators at Berkeley.

Growth of Usenet

Berkeley was the ground where the initial thinkers of Usenet met Mary Horton, a meticulous Ph.D. student. She, alongside Bellovin, brought TCP/IP into the Usenet lab trials. They experimented with various network links and tried to fold mailing lists into Usenet groups. This was a revolutionary idea, unforeseen at the time. The groups were divided by interest, which is quite logical, as shared interests are a prime way to connect and become part of a social group.

With the prospect of real traffic on the Usenet network, it became an exciting way to share interests and connect with like-minded people. Two groups stood out: SF LOVERS and HUMAN NETS. SF LOVERS was for science fiction enthusiasts, of whom there were quite a lot, and was dedicated to understanding and debating science fiction. HUMAN NETS was a little more complex: it was about the general networking behavior of people. Its primary purpose was simply to find people, to connect with people you could talk to about things that were interesting and substantial.

Usenet Backing

All these activities were permitted on Berkeley's grounds. Many other budding networks were shut down, whether through poor execution or by authority. Usenet was non-commercial; it did not seek to raise money, and thus was not perceived as a threat by any entity.


As sites became active on the network, Usenet grew dramatically. There were more newsgroups, articles, and information sources. With this growth came B-news, news server software with significant improvements. One impressive feature of B-news was the ability to read articles out of order. It also encouraged messaging, but the human psyche was the same then as now: messages soon led to fake news and pranks, so more control over this feature was demanded.

Another improvement over A-news was B-news's directory storage. Directories formed a hierarchical structure, introducing the concept of subdirectories, which was great for organizing the database and further discussions. B-news was thus significantly better than A-news: it addressed A-news's loopholes and encouraged further advancement.

Usenet Consequences and Negative Notions

Usenet was indeed quite beneficial, a breath of fresh air in those times of technological upheaval. There were, though, some negative notions as well. Servers were sometimes unable to carry the load, crashing or failing to respond. There was some mischief around the messaging and emailing systems too. The original purpose of the network was fading in the chaos of connections and information overload.

With later changes in technology, many of Usenet's negative consequences have been repaired. It has merged with more refined and foolproof systems, and there are new servers as well.

Usenet History: The Public Announcement


Usenix was a relatively small and informal organization in the eighties. Its members wanted to announce Usenet at a Usenix meeting in 1980, in the days when the organization held its meetings at universities instead of big conference halls and plush hotel meeting rooms. The meeting was held in Boulder. Steven Bellovin was not present, but Jim Ellis and Tom Truscott attended.

Bellovin felt that, beyond announcing Usenet, they required non-experimental code, and his prototype was not adequate to succeed. However, he does not remember the exact deficiencies of his C version. He thinks one of them could be the code's inability to configure which neighboring sites would get which newsgroups.

Issues while announcing Usenet

Incidentally, Stephen Daniel came up with the code that was referred to as "A-news". A crucial alteration was support for multiple hierarchies instead of simply the original "NET." or "NET". The production version also made it possible to configure which hierarchies or groups a site would receive.

The members believed that the configuration should live in a file instead of being embedded in an array within the code. This principle was not always followed. For instance, UUCP used an in-code array to list the commands that remote sites were allowed to execute.

To allow execution of the rnews command, a system administrator had to alter the source code and recompile. Looking back, this seems like a wrong decision; in those days, however, some argued that it was justifiable, since there were very few such commands at the time.

Sorting out the problems

The group decided to sort out this problem and came up with a mail-to-rnews program. A sending site could email articles instead of directly trying to execute rnews, and a clock-driven system retrieved the email messages and transferred them to rnews.

The system had to be clock-driven because there was no way in those days to have email delivered directly to a file or a program. As a result, the A-news remote-site configuration file needed to include the command used for this execution.
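The clock-driven retrieval described above can be sketched as a periodic job that drains a queue of mailed-in articles and hands each one to rnews. This is an illustration, not the original code: the queue-directory layout and the `deliver` callback are assumptions standing in for the mailbox and the rnews command.

```python
import os

def drain_queue(queue_dir: str, deliver) -> int:
    """Run periodically (e.g. from a clock-driven job): hand each queued,
    mailed-in article to `deliver` (standing in for rnews), then remove it."""
    handled = 0
    for name in sorted(os.listdir(queue_dir)):
        path = os.path.join(queue_dir, name)
        with open(path) as f:
            deliver(f.read())  # in the real system: pipe the article into rnews
        os.remove(path)
        handled += 1
    return handled
```

A cron-style entry would invoke such a job every few minutes, since mail at the time could not be delivered directly to a program.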

The group wanted the first articles to cover issues such as trouble reports, general requests for help, and bug fixes. At that time, there was a strong culture of mutual assistance among developers, not only in organizations like Usenix but also in the world of the IBM mainframe.

Another proposal was to locate interesting source code without flooding the network with it. The reason was that software could be bulky while phone calls at the time were costly: nighttime phone rates were around 0.50 USD for 3 minutes.

It was also a time when, at 300 bps (about 30 bytes per second), one could transmit a maximum of 5,400 bytes in a 3-minute call. Taking protocol overhead into account, the conservative estimate was about 3,000 bytes per call, or roughly one kilobyte every minute.

So transferring the UUCP source, approximately 120 KB, at 1 KB per minute would take 2 hours and cost about 20 USD. Adjusted for inflation, that is more than 60 USD today. And most people did not want most packages anyway.
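The estimate above can be checked directly, using the effective rate of about one kilobyte per minute derived earlier and the stated nighttime phone rate:

```python
# Sanity check of the transfer-cost estimate: ~120 KB of UUCP source at an
# effective 1 KB/minute, billed at the nighttime rate of 0.50 USD per 3 minutes.
size_kb = 120
rate_kb_per_min = 1
cost_per_3_min_usd = 0.50

minutes = size_kb / rate_kb_per_min
hours = minutes / 60
cost_usd = (minutes / 3) * cost_per_3_min_usd
print(hours, cost_usd)  # → 2.0 20.0
```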

Lack of adequate bandwidth

Another issue was that Duke had just a couple of autodialers, so it lacked the bandwidth to transfer large files to several places; trying to do so would have stopped all news transfers to other sites. Instead, the suggestion was that Duke could act as a central repository from which software might be retrieved on demand. UUNET adopted this model later on.

Most interestingly, however, the announcement did not discuss any possibility of non-technical use. There were no hobby, social, or political discussions. They did not think people would want to discuss issues with someone they had never met.

Usenet History: Authentication and Norms

Usenet is a computer-based, internationally distributed discussion system. It was built on the Unix-to-Unix Copy (UUCP) dial-up network. The idea was devised by Tom Truscott and Jim Ellis in 1979, and the system was established in 1980. Users could read and post messages (known as articles or postings, and collectively referred to as news) to one or more newsgroups. In many ways, Usenet is similar to a bulletin board system (BBS), and it was the forerunner of today's commonly used Internet forums. Discussions were threaded, much like modern web forums and bulletin boards, except that posts were stored chronologically on the server.

Authentication & Security

Usenet was now up and running. Well, what now? Truscott and his group were aware that it would need some sort of administration, and they soon realized that they were lacking in the authentication arena as well. They needed an authentication system for sites, users, and even posts. They were aware that they would need something like public-key cryptography to authenticate, but they did not know how a site's public key itself could be authenticated.

They could have used a certificate issued by a certificate authority, but they did not know of such a thing at the time. Even if they had, there was no way to get it done, as no such authorities existed yet, and they could not simply create one themselves while also creating Usenet. Their knowledge of the subject ran thin, and they had questions they could not quite answer: what exactly was secure, and what would be the ideal key length?

This shortcoming led them to consider alternatives: what if sites could authenticate each other in a peer-based scheme? They considered this neighborhood authentication for a while but ultimately met with failure because of loopholes in its logic. To spoof an authentication, a user simply had to fake the path line to claim to be a couple of neighboring sites away, a serious security concern.

Security was quickly becoming a roadblock that kept them stagnant for a while. They realized that their approach to authentication had several loopholes that anyone with even slight knowledge of Usenet could get through.

To understand this flaw, note that messages on Usenet were sent using a generic remote execution mechanism: a site transmitted an article by remotely executing the 'rnews' command on the next computer in line. This is exactly what was concerning. Anyone who knew this detail could easily pose as a trusted neighbor and inject articles.
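The weakness in neighborhood authentication can be shown with a toy check. This sketch assumes a simplified bang-path similar to Usenet's path line; the function and site names are hypothetical, not from the original system.

```python
# Toy illustration of why path-based neighbor authentication fails.
def looks_like_neighbor(path: str, trusted_neighbors: set) -> bool:
    """Trust an article if its path claims it arrived via a known neighbor."""
    first_hop = path.split("!")[0]
    return first_hop in trusted_neighbors

trusted = {"duke", "unc"}

# A legitimate article relayed by a neighbor passes the check...
print(looks_like_neighbor("duke!research!tom", trusted))  # → True
# ...but an attacker passes too, simply by forging the path line.
print(looks_like_neighbor("duke!mallory", trusted))       # → True
```

Because the path line is just text the sender writes, nothing in the check distinguishes a real relay from a forgery.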

At the time, it probably would not have caused a big issue, because the number of users was very low and the network was used by only a handful of people. But building something that merely gave a perception of security, instead of real security, would not have been very ethical.


In principle, detecting misconduct and determining who perpetrated it would be simple. Like UNIX, the UUCP system was not built to prevent excess consumption. Which uses of the network actually counted as misuse, and what should be done about them, would be revealed through experience.

Abuse of the network could be quite dangerous. Like conventional misdeeds, abuses could be discussed, watched for, and sometimes even coded against, but only experience would reveal what counted. UUCP provides a certain level of security: it operates as a regular user with stringent access restrictions, so it is reasonable to argue that it poses no greater hazard than a call-in line. The main issue was that they had no idea what to do or what other difficulties would arise. They were concerned about security to some degree: they had actually encountered their earliest hackers in 1971, when suspicious behavior resulted in a console alert. That ordeal prompted them to inspect the punch cards for the code in question, but it was far from their major concern. They also made a quick-and-easy password guesser and let a few individuals know that their passwords were terrible and could lead to hacking.

Usenet History: Implementation and User Experience

Before you can understand the implementation of Usenet, you must know two critical things that contributed to it. First, the University of North Carolina Computer Science department had a Unix machine with a slow, small disk, a slow CPU, and, most importantly, very little RAM. This machine was slower than most time-sharing machines of 1979. Duke CS had a relatively faster computer, an 11/70, but since this was the first implementation, UNC's 11/45 had to be used. In 1979 there was no Internet, and the departments were not connected to the ARPANET, so logging in remotely was not an option. Using the dial-up network, billed per minute, incurred daytime charges. A speed of 9600 bps could be reached if connected via the Gandalf port selector.

The second important thing to bear in mind is that the first implementation would involve few experiments. The first public announcement of Usenet spelled out the problems faced during implementation. Many questions remained open, but it was time to get started: once Usenet was available, a committee could be formed, and that committee could use the net itself to begin analyzing the problems. A network protocol had not been designed before, so a few experiments had to be carried out to get things right. Do keep in mind that Tom Truscott and Jim Ellis had programming experience and were experienced system admins; Tom had communications software experience and had been programming, including kernel-level software, for about 14 years.

Implementation of Usenet

The strategy used for developing Usenet was rapid prototyping. The first version of the netnews software was implemented as a Bourne shell script, about 150 lines long, with features such as cross-posting and multiple newsgroups.

But why use a shell script? The simple reason: compiling a program took a very long time, and the long waits tempted Tom to start on something else. Most of the code involved string handling, and C was not good at string handling. One could have written a string library, but that was time-consuming given the slow compilation speeds. With a shell script, you could try out new things and develop the code incrementally. The shell script was slow to execute, but that was not much of a hindrance, because it was not a production program; it was a prototype intended to settle the file format. Once everything was in place, it was rewritten in C.

Implementation details

Regrettably, neither the script version nor the C version of the implementation is available today. However, Tom remembered a few of the implementation details. A user's subscribed newsgroups were saved in an environment variable set in the .profile file. There were commands to retrieve all the articles the user had not yet read: the last-read time was kept in $HOME/.netnews, and only on successful exit was the last-read time updated. The script had no capability to read articles out of order, to skip articles and return to them later, or even to stop reading midway through an article. The limitation stemmed from a mistaken assumption: that only a couple of articles would be read per day. Incoming traffic today is 60 tebibytes per day; the prediction was off by many orders of magnitude.
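The bookkeeping described above can be sketched as follows. The file path mirrors the $HOME/.netnews convention mentioned in the text, but the on-disk format and the helper names are assumptions, not the original script's.

```python
import os
import time

def last_read_time(state_file: str) -> float:
    """Return the stored last-read timestamp, or 0.0 if none exists yet."""
    try:
        with open(state_file) as f:
            return float(f.read())
    except FileNotFoundError:
        return 0.0

def mark_all_read(state_file: str) -> None:
    """Record the current time. In the original script this happened only
    on a successful exit, so an interrupted session lost no articles."""
    with open(state_file, "w") as f:
        f.write(str(time.time()))
```

An article counts as unread if it arrived after the stored timestamp, which is why a single high-water mark sufficed and the script could not skip articles or stop mid-article.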

Other implementation details

Another detail was that cross-posted articles were not displayed more than once. A cross-posted article was stored as a single file linked from multiple directories. This technique not only made duplicates easy to find but also saved considerable disk space, and at that time disk space was quite expensive.
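The single-file, multiple-directory scheme can be demonstrated with hard links. The spool layout, group names, and filename below are illustrative assumptions based on the description above.

```python
# Sketch of storing a cross-posted article once: one file, hard-linked
# into each newsgroup's directory instead of copied.
import os
import tempfile

spool = tempfile.mkdtemp()
for group in ("net.general", "net.unix"):
    os.makedirs(os.path.join(spool, group))

# Write the article once, under the first group...
original = os.path.join(spool, "net.general", "duke.101")
with open(original, "w") as f:
    f.write("Subject: cross-posted example\n\nbody\n")

# ...then hard-link it into the second group instead of copying it.
link = os.path.join(spool, "net.unix", "duke.101")
os.link(original, link)

# Both names share one inode: a reader can detect the duplicate, and the
# article's text is stored on disk only once.
print(os.stat(original).st_ino == os.stat(link).st_ino)  # → True
```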

A few other points are worth knowing. No global coordinator was needed, because each article ID consisted of the site name, a period, and a sequence number, and the filenames were the article IDs (subject to a character limit). A database might have helped to store the files, but a single database of all news would have required a locking mechanism, which was hard to achieve on 7th Edition Unix: pipes had to be created before the processes, so the file system had to be relied upon. The UI resembled the 7th Edition mail command; it was simple and worked seamlessly for exchanging low volumes of mail.
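The ID scheme described above avoids any global coordinator because each site numbers only its own articles; uniqueness follows from site names being unique. A minimal sketch (the factory function is an assumption for illustration):

```python
import itertools

def id_factory(site: str):
    """Return a generator of article IDs of the form <site>.<sequence>,
    unique across the network without any central coordination."""
    counter = itertools.count(1)
    return lambda: f"{site}.{next(counter)}"

next_id = id_factory("duke")
print(next_id(), next_id())  # → duke.1 duke.2
```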
