E-SPEAIT T3 New Media


Back to the course page

From Usenet to Twitter: The (Not So) New Media


The term 'new media' appeared in various sources around the turn of the century. Different authors and research disciplines have understood the term somewhat differently, but in general it has denoted the newer trends in mass media that are mostly based on IT (especially the Internet). Some of the keywords connected to the phenomenon could be

  • on-demand
  • digital
  • interactive
  • inclusive
  • two-way / networked
  • user-generated
  • read/write

Like many other buzzwords, it has from time to time been promoted as a universal cure for all problems. Yet it has definitely changed the game on both the media and the technology side, and thus has had a definite influence on more general processes in society.

An aspect here could be the possible division of all information on the Internet into two categories. First, there is passive information that is "there on the Net" for everyone to access - this includes web pages, public databases, FTP sites etc. This kind of information is typically accessible with various search engines like Google or Bing. The other kind of information is not accessible to Google - the active information which resides in the heads of people participating in the network communities (in a way, it is the information "not on the Web yet"). Retrieving this kind of information demands active communication with other people and involves various other aspects like netiquette (which will be discussed in detail in another lecture). In a more complicated case, this kind of information collection involves

  • proper etiquette and communication skills to negotiate the knowledge
  • putting the received pieces of knowledge together
  • synthesis of new, personal knowledge

Note: a well-known resource focusing on collecting active information is How to Ask Questions the Smart Way by Eric S. Raymond - strongly recommended reading for nearly everyone communicating over the Internet, as its principles are equally valid across different online channels.

The following is a brief overview of some of the main components of the new media.

Early attempts

Several authors have suggested that one of the earliest virtual communities consisted of telegraph operators in the 19th century - besides forwarding prescribed content, they sometimes chatted and exchanged news in a way similar to today's social media. In early computing, the following projects are known for forming virtual communities:

  • PLATO - The University of Illinois at Urbana-Champaign developed a computing culture distinct from the major centres (of the time) at MIT and Stanford. Built on a series of homegrown mainframe computers, the PLATO system was a birthplace of several major applications, including the Notes message board (launched in 1973) which developed an early user community.
  • Community Memory was a 1973 messaging service stemming from Resource One, a social research project at the University of California at Berkeley. The community-owned mainframe was accessed via a public terminal at a local music store, attracting a wider public (e.g. musicians seeking bands or gigs). It also featured Benway, possibly the first known 'virtual persona'.
  • MUD1, created at the University of Essex by Roy Trubshaw and Richard Bartle, became the father of a whole new type of online game and the grandfather of today's MMORPGs.
  • EIES was a message board system started at New Jersey Institute of Technology in 1974. Initially meant as a four-year project, it 'refused to die' as participants took it over as a community and extended its life for years.
  • The WELL (Whole Earth 'Lectronic Link) is a commercially operated, low-cost message board that is still operational and considered one of the oldest existing virtual communities. It is notable for having been the first meeting and discussion venue for many Internet dignitaries like Mitch Kapor and John Perry Barlow.

Mailing lists

Perhaps the oldest Internet channel with social software characteristics - the first mailing lists were formed soon after the emergence of e-mail in 1972[1]. Simply put, a mailing list is software which forwards all correspondence sent to the list address to its members. Classic mailing list systems included LISTSERV, GNU Mailman and Majordomo - later on, mailing list capabilities were added to various other applications (forums, portals etc).
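The core mechanism is simple enough to sketch in a few lines. The following toy Python class (all names and addresses are invented for illustration; real systems like LISTSERV or Mailman add archiving, moderation, digests, bounce handling and much more) shows the one essential step: a message sent to the list address is re-sent to every subscriber.

```python
# A minimal sketch of the mailing-list mechanism: one message in,
# one copy out to each subscribed member.

class MailingList:
    def __init__(self, address):
        self.address = address
        self.members = set()

    def subscribe(self, member_address):
        self.members.add(member_address)

    def post(self, sender, body):
        """Forward a message from `sender` to every subscribed member."""
        return [(member, f"From: {sender}\nTo: {self.address}\n\n{body}")
                for member in sorted(self.members)]

# Usage (hypothetical addresses)
mlist = MailingList("discuss@example.org")
mlist.subscribe("alice@example.com")
mlist.subscribe("bob@example.net")
outgoing = mlist.post("carol@example.edu", "Hello, list!")
print(len(outgoing))  # → 2, one copy per member
```

Everything a real list server does - subscription management, moderation queues, archiving - is layered on top of this simple fan-out step.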

While there are newer technologies available, the number and diversity of mailing lists is still huge. Different lists have different rules on membership (some are open to everyone, some do a thorough background check), acceptable behaviour, writing style etc. Some lists are newsletter-type ones with members being just passive recipients of information, but in most cases list members contribute as well. A number of more specific lists are moderated - there are people checking all submissions before passing them to the list, which is useful with highly controversial topics (e.g. various religious, gender-related, political and historical issues).

Mailing lists, due to their long existence, are also the birthplace of netiquette - the (often informal) rules of good conduct online, which are often enforced by the whole community. Again, one of the reasons for this is that mailing lists often include top-ranking people in their field - thus, one can get advice for free from experts whose 'real-life' consulting session would be unaffordable for many. Misbehaviour jeopardises this opportunity and may therefore provoke rather harsh reactions.

There are several reasons for the longevity of the technology - e-mail is by its nature bandwidth-effective (being pure text) and usable on a large variety of platforms without technical and legal limitations (a good comparison is Skype: it needs more bandwidth, needs dedicated client software and has a proprietary communication protocol).

Usenet newsgroups

Usenet (from the original name Users' Network) is another early service based on e-mail. It was founded at Duke University and was at first based on the UUCP communication protocol - Usenet servers 'called' each other periodically and exchanged their messages. Later, in 1986, a separate NNTP protocol was adopted that was better suited to the fixed, broadband connections replacing modems.

Compared to mailing lists, Usenet has largely become obsolete due to competition from other, mostly web-based services (e.g. web forums). Its birthplace at Duke shut down its Usenet service in 2010[2]. While the data volumes there are still significant today, a large share of them is attributed to automatic transfers in binary newsgroups (including some shady content).

Still, Usenet is worth studying - if only for its long history (the Duke server ran for 32 years). The development of netiquette was also influenced by Usenet, and it witnessed the birth of several very important developments[3]. For example, the Usenet archives contain the message from Tim Berners-Lee about his new WWW protocol[4], and the announcement from Helsinki University student Linus Torvalds about his new operating system project[5] (later known as Linux). Note: the Google Groups archive requires logging in to Google.

Usenet consists of topical hierarchies, of which the main ones are

  • comp. - computer-related topics
  • misc. - various topics (miscellaneous)
  • news. - topics about Usenet itself (new groups etc)
  • rec. - recreational topics (music, sports)
  • sci. - topics about sciences
  • soc. - topics related to society and culture
  • talk. - various, often controversial topics (politics, religion)
  • humanities. - topics about humanities (philosophy, arts)

The groups go from more general topics to more specific, e.g.

  • comp.os - operating systems
  • comp.os.ms-windows - Microsoft Windows operating systems
  • comp.os.ms-windows.apps - applications for MS Windows

A separate note should be made on the alt. hierarchy, which was created later to allow greater freedom in creating new groups - in the classic hierarchies, there were strict procedural requirements and a certain number of participants was needed to apply. In alt., more or less anyone was entitled to create a newsgroup, which soon made it the 'anarchist' part of Usenet. The binary traffic mentioned above also mostly takes place in the alt.binaries. subhierarchy.

There were also some smaller, local hierarchies - for a while, the ee. (Estonian) hierarchy was rather active, especially as the Internet connections for Estonian schools were slow and low-quality during the 90s.

Technically, Usenet is a network of news servers. Every server announces the list of groups it supports, and all traffic for these groups is forwarded to it. The user can choose the nearest server to read news from - e.g. downloading a Usenet message in London even if it was originally posted in Tallinn. Originally, a separate software application (a newsreader) was needed, but these functions were later included in most e-mail software. As with mailing lists, some Usenet groups are moderated to ensure better quality of submissions.

Search engines and Web directories

Search engines are software applications which collect information from ("crawl") the Web, index the content and allow searching by keywords and phrases. The first systems of this kind appeared around 1993.

  • WWW Wanderer (or just Wanderer) by Matthew Gray was likely the first "web robot" capable of moving along the Web on its own[6]. Yet it did not evolve into a full search engine.
  • AliWeb by Martijn Koster from May 1994 is considered 'the first' by some sources; others argue that while it allowed searching, it did not possess a web robot and indexing was done by hand[7].
  • WebCrawler by Brian Pinkerton is widely considered the first full-text search engine, capable of indexing whole pages (earlier engines only took a small portion).[7]
  • Lycos was the first search engine which grew into a business. They started with a search engine and added other services later (Yahoo!, for example, did it the other way round).
  • Altavista, launched by Digital Equipment Corporation in 1995, was probably the first widely used search engine that also supported different languages. During the last years of the 20th century, until the emergence of Google, it was THE search engine (though the Web was remarkably smaller back then). In 2004, it was acquired by Yahoo!
  • Yahoo! was the first large-scale business built around a web directory. Starting out as a simple link collection by two students (Jerry's Guide to the World Wide Web), it became a large integrator of different services. Despite various problems, it has survived and as of 2021 retains about a 1.5% share of the search engine market[8].
  • Google grew out of a research project of two Stanford students, turned into a company in 1998 and soon took over the market. The current global market share (January 2021) is over 91%[8].
  • Bing grew out of the search features in Microsoft systems and went public in 2009. Its current market share is around 2.5%[8].
  • Baidu is a Chinese search engine especially promoted as a substitute for Google after the latter pulled out of China. While it mostly targets its home (lately also the Japanese) market, its global market share is a little over 1%.[8]
  • DuckDuckGo is an American search engine that claims to be 'privacy-conscious' and not to track users. It has become the default search engine in many Linux distributions, but its mainstream market share is still under 1%[8].

Note that most large search engines are backed by commercial ventures, and their exact working mechanisms are considered trade secrets. In general, their main component is the web robot (also called a crawler or spider) that moves through the web and searches for links. The collected information is processed into a database; the main bases of indexing are titles and keywords. Compared to the early years, the effectiveness of search engines has somewhat declined due to the web growing very fast, pages changing rapidly, and the growing share of dynamic pages which are read from databases and put together on demand, at the request of the reader.
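The crawl-and-index cycle described above can be illustrated with a toy Python sketch. To keep it self-contained, it runs over an in-memory "web" (a dictionary of invented URLs and page contents) instead of real HTTP; real crawlers also obey robots.txt, rank results, handle malformed HTML and much more.

```python
import re
from collections import defaultdict

# A made-up two-page "web" for the example.
PAGES = {
    "http://a.example/": '<title>Home</title> Welcome! <a href="http://b.example/">news</a>',
    "http://b.example/": '<title>News</title> Search engines crawl the web.',
}

def crawl_and_index(start_url, pages):
    index = defaultdict(set)           # word -> set of URLs containing it
    frontier, seen = [start_url], set()
    while frontier:
        url = frontier.pop()
        if url in seen or url not in pages:
            continue
        seen.add(url)
        html = pages[url]
        # Index the visible words (tags crudely stripped out).
        for word in re.findall(r"[a-z]+", re.sub(r"<[^>]+>", " ", html.lower())):
            index[word].add(url)
        # Follow the links found on the page.
        frontier.extend(re.findall(r'href="([^"]+)"', html))
    return index

index = crawl_and_index("http://a.example/", PAGES)
print(sorted(index["news"]))  # → ['http://a.example/', 'http://b.example/']
```

Both pages end up in the posting list for "news" - one mentions it in its title, the other in a link text - which is exactly the kind of inverted index a keyword search is answered from.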

A problem with search engines is potential outside influence on the information. Sometimes the visibility of a web page can depend on payments to the search company (as most of these companies have advertisement as a major source of income). Sometimes censorship interferes with search results. And the art called search engine optimization or SEO is a multi-layered discipline with both 'light' and 'dark' components[9].

A remarkable case of search engine abuse was the Google bomb, especially popular during the 2000s. Technically, these were achieved by a sufficient number of participants (web page or blog owners) who wrote the search word or phrase on their pages and linked it to the 'victim'. Likely the most famous one was 'miserable failure' linking to George W. Bush, but there were popular examples elsewhere as well (in Estonia, two less-liked political parties used to be targeted with dismal and terrible, and a Mayor of Tallinn was connected to several less than flattering epithets).
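In markup terms, the trick relied on anchor text: link-analysis ranking treats the clickable text of inbound links as a description of the target page. A hypothetical participant's page (the URL below is a placeholder) would simply contain a snippet like:

```html
<!-- Hypothetical Google bomb fragment: many pages repeating the same
     anchor text for one target could push that page up in the results
     for that phrase. -->
<a href="https://victim.example/">miserable failure</a>
```

With enough independent pages carrying the same phrase, the target ranked highly for a search term that never appeared on the target page itself.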

It is also easy to ban search engines from one's web pages (common methods include the robots.txt exclusion file and server-level rules such as Apache's .htaccess). Considering all this, the effectiveness of web search has gone down. A solution has been proposed in the form of the semantic web, where the essence and meaning of web content is included with each page.
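For illustration, a minimal robots.txt (placed at the site root; the paths are invented for the example) might look as follows. Note that it is a polite request honoured by well-behaved robots, not access control:

```
# robots.txt - advisory crawler exclusion rules
User-agent: *          # applies to all crawlers
Disallow: /private/    # ask them to skip this path

User-agent: BadBot     # a hypothetical crawler banned entirely
Disallow: /
```

A misbehaving robot can simply ignore the file, which is why server-level blocking (e.g. via .htaccess) is sometimes used as a stronger measure.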

Web forums

Web forums are descendants of the earlier bulletin board and message board systems of the dial-up age. They allow rapid exchange of information, but unlike real-time chat or instant messaging, the information is usually archived for later use. Like newsgroups and mailing lists, forums can be moderated - and unlike the former, reactive moderation (purging posted content that is deemed unacceptable) is also possible (if done quickly enough that e.g. Google Cache or the Internet Archive has no time to pick it up).

Some forums can also act as newsgroups or mailing lists (depending on the user settings) - e.g. Google Groups (Yahoo! Groups was discontinued in 2020).

Online journalism

The first reflection of traditional media on the Internet was probably the databases of journal and newspaper articles (an example was the New York Times Information Bank, which collected abstracts of newspaper stories[10]). In 1971, the University of Illinois started to collect electronic texts on their mainframe - the initiative grew into Project Gutenberg, which today houses thousands of e-texts (more recently also e-books in various formats) from ancient times to the early 20th century (most newer texts are still under the current system of copyright).

The 1980s were a period of widespread commercialisation attempts for news - in many places, they were countered by Free-Nets (not to be confused with Freenet), community-operated public terminals offering inexpensive or free access to some Internet services as well as other news sources (thus also being an early example of social networks).

The emergence of the Web in 1991 changed the game for the media as well. Even though many media outlets, particularly in the US, tried to stick with commercial, closed network providers (AOL, Prodigy, CompuServe), news quickly became a commodity. On the other hand, this sparked the still-continuing war between censorship and free speech (CDA, COPA, CIPA) as well as the 'intellectual property' and security-related legislation strongly promoting censorship (DMCA, SOPA, PIPA, ACTA, CISPA and others).

The new century has brought many attempts to strike a balance between commercial interests and free distribution, especially in the context of new portable devices (smartphones and tablets).

Real-time chat

This group of tools overlaps somewhat with instant messaging, discussed below, and consists of 'text phone'-type solutions.

Talk was the communication tool on Unix systems (early versions were developed as early as the 1970s), allowing an instant messaging-like text chat session between two users logged on to the same system. In some sense, the communication was even more immediate than in today's messengers, as each character was immediately visible to the other side. While still available today (e.g. in the Ubuntu repository), it has mostly been obsoleted by more modern solutions (partially also due to recurring security issues).

Talkers were communication systems which can be considered an early form of virtual world (Estonia had a talker boom in the 90s - almost every school, university and company had one). A talker features a 'world' consisting of different 'rooms' for people to move between and chat in. Usually, the world is shaped as a set of locations with a unified theme - an apartment with rooms, or a city with a city hall, parks and other places, etc. The main point is communication between people, which happens on three levels: private ('tell'), local/room-based ('say') and global ('shout'). Earlier (some say classic) talkers were used over Telnet; later ones also had a web interface.
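The three communication levels can be sketched as simple message routing. The toy Python class below (all user and room names are invented; a real talker adds logins, room movement, commands, persistence etc) shows who receives what at each level:

```python
from collections import defaultdict

# A toy sketch of a talker's three communication levels:
# tell (private), say (room-local), shout (global).

class Talker:
    def __init__(self):
        self.location = {}                # user -> current room
        self.inbox = defaultdict(list)    # user -> received messages

    def join(self, user, room):
        self.location[user] = room

    def deliver(self, recipients, text):
        for user in recipients:
            self.inbox[user].append(text)

    def tell(self, sender, target, text):     # private: one recipient
        self.deliver([target], f"{sender} tells you: {text}")

    def say(self, sender, text):              # local: everyone in the same room
        room = self.location[sender]
        others = [u for u, r in self.location.items() if r == room and u != sender]
        self.deliver(others, f"{sender} says: {text}")

    def shout(self, sender, text):            # global: everyone online
        others = [u for u in self.location if u != sender]
        self.deliver(others, f"{sender} shouts: {text}")

t = Talker()
t.join("mari", "city hall"); t.join("jaan", "city hall"); t.join("kai", "park")
t.say("mari", "Tere!")            # only jaan (same room) hears this
t.shout("kai", "Anyone around?")  # mari and jaan both hear this
```

The entire social texture of a talker - private gossip, room conversation, world-wide announcements - rests on this small routing distinction.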

In some places (Estonia being one of them) talkers helped to form strong communities whose members also communicated IRL ('In Real Life'). An interesting Estonian case was the Old Town talker, which was presided over (and also maintained) by a man with a profound multiple disability (including the inability to speak) - yet his personal charisma was such that people who got to know him online had no problem communicating with him in person as well, an enlightening experience for both sides. Yet there were negative examples too, with cases of cyberbullying and online harassment.

IRC has features in common with a number of other services. Technically and historically, it is the successor of Unix Talk, but allows various communication channels (which can be seen as talkers) and simultaneous communication. Topically, it is similar to Usenet and mailing lists in having channels for set topics. IRC does not require user registration as most talkers do, thus being more anonymous - on the other hand, the topical restriction helps to prevent it from descending into chaos. Still, compared to newsgroups or lists, IRC channels are usually less formal in style.

Just as with Usenet, IRC archives contain a lot of historically interesting material, e.g. recordings of the discussions taking place during the break-up of the Soviet Union[11].

Multi-user online games

MUD (Multi-User Dimension or Dungeon) is a multi-user, multi-room environment similar to a talker (also a text-based virtual world accessible via Telnet), but featuring in addition complex mechanics for a (typically role-playing) game with puzzles, computer-controlled characters and various objects. The first MUDs appeared at the end of the 70s[12]. Initially, the tabletop role-playing game Dungeons and Dragons had a strong influence on MUDs, with the computer just replacing the game manager (known as the Dungeon Master).

While most MUDs tend to follow a Tolkien-like high fantasy theme, there are also some with sci-fi or other themes. Players typically choose a race and/or a profession for their characters and gather experience during the game, making them more powerful. Two main categories of MUDs can be distinguished:

  • Action-oriented - the goal is to develop one's character by gathering powerful objects and achieving higher levels - in this sense, they are similar to mainstream adventure games. Some MUDs of this type only allow fighting computer-generated enemies, while in others, fighting other players is allowed as well (sometimes with restrictions like having designated areas or clearly defined sides of war).
  • Interaction-oriented - these MUDs are more similar to talkers, adding some kind of roleplay to the communication. The action component is unimportant or sometimes missing altogether. The main point is being 'in character'. These environments are sometimes named MUSH (Multi-User Shared Hallucination), MUSE (Multi-User Simulated Environment) or MUX (Multi-User Experience) instead.

More recently, MUDs have been contested by the MMORPG (Massively Multiplayer Online Role-Playing Game). As computers became more powerful and broadband connections faster, creating fully graphical multi-user worlds became feasible. Probably the most popular example of this type of game is World of Warcraft by Blizzard. However, text-based MUDs have not disappeared yet - an example is MUME (Multi-User Middle Earth), which is still running (founded in 1992), has several hundred players daily and features a large share of Tolkien's Middle-earth, consisting of about 27 000 rooms[13].

Instant messaging

The software in this category brings together traditional e-mail, talker-style real-time chat and mobile messaging, more recently also merging in voice and video chat and video messages (the best example is probably Skype which, while starting out as an Internet telephony application, has since moved towards the IM realm). A distinctive feature here is the contact list containing the other people authorised by the user to contact him/her (thus adding a kind of threshold to the communication).

Messengers in their current form appeared in the mid-90s (ICQ, AOL Instant Messenger). They were later followed by Yahoo! and Microsoft, whose version became probably one of the most popular (due to its integration with MS Windows). While at first all the protocols were proprietary, making messengers unable to interoperate, later developments have strived towards some interoperability (examples include multi-protocol clients like Trillian and Pidgin, as well as the emergence of the open XMPP/Jabber protocol).

News feeds

News feeds (web feeds) are data distribution mechanisms for rapidly changing online content. The feeds are read either by other websites (for intersite communication) or aggregated into newsreaders by end users. Feeds allow users to create topical collections of websites of interest (e.g. blogs that discuss computer security) and to follow much larger volumes of web content than would be possible by just browsing.

The first proposed standards for news feeds were based on RDF, the metadata framework developed at the W3C since 1997. RSS appeared at Netscape in 1999, and in 2003 Atom was created as an alternative.

News feeds can be read by a wide variety of software (usually called aggregators) - there are standalone applications as well as readers built into other applications, plus web-based aggregators (e.g. the now-discontinued Google Reader).

News feeds are an important component of blogs, allowing readers to follow a large number of blogs without having to check them manually (many blogs are updated irregularly), receiving a notice only when a blog is updated.
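The aggregator side of this is mostly XML parsing. The Python sketch below parses a made-up RSS 2.0 feed (all titles and URLs are invented) and lists the entry titles - a real aggregator would additionally fetch the feed over HTTP and poll it periodically for new items:

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 feed as an aggregator might download it.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Security Blog</title>
    <item><title>Patch Tuesday notes</title><link>http://blog.example/1</link></item>
    <item><title>Phishing trends</title><link>http://blog.example/2</link></item>
  </channel>
</rss>"""

channel = ET.fromstring(FEED).find("channel")
titles = [item.findtext("title") for item in channel.findall("item")]
print(titles)  # → ['Patch Tuesday notes', 'Phishing trends']
```

Comparing the parsed item list against the items seen on the previous poll is what lets a reader notify the user only when something new has been published.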


Blogs

Blogs are diary-like websites which display their main content (called 'posts') in reverse chronological order and allow easy adding and editing of new material. For many, a blog must also allow comments - every post can be commented on by readers (regulating commenting is up to the owner). Many blogs also display a collection of web links.

Blogs have many predecessors, from ancient chronicles to personal diaries to the first website created by Tim Berners-Lee (which was a blog of sorts, describing the development of the new technology[14]). Well-known early "Internet celebrities" included Justin Hall and Jerry Pournelle. The term itself was coined by Peter Merholz in 1999, when he split the word 'weblog' in an unorthodox manner, getting 'we blog'[15].

The popularity of blogs is due to various reasons, some of them being

  • availability of cheap hardware which, in combination with inexpensive broadband connections and free software, makes for inexpensive web servers
  • availability of a wide variety of free and open-source software, plus a number of well-established web-based platforms (blogger.com, wordpress.com)
  • availability of good syndication mechanisms (RSS and others) allowing mainstream media to pick up interesting stories from blogs
  • actual or perceived attempts to suppress free speech, giving rise to alternative media

The spectrum of blogs is extremely wide, ranging from personal diaries to the scientific blogs of large research groups and the corporate blogs of large multinationals; topics range from silly jokes to top-level business, politics, media and science. In some countries (e.g. Iran), blogging has largely risen to replace the official media, which is seen as controlled by the government.


Wikis

Traditional web pages were one-directional - the author wrote them for others to read. In 1995, however, Ward Cunningham in Portland started to experiment with a web page 'that anyone could edit' and named it WikiWikiWeb. The name was picked up from the Honolulu Airport transfer bus of the same name, 'wiki' meaning 'quick' in Hawaiian[16]. His idea was to make both editing and reversing edits as simple as possible, allowing other users to revert undesirable edits easily. This principle has been the foundation of wikis ever since.

While the most famous wiki is probably Wikipedia, the technology has spread widely and is nowadays in use in very different contexts, from education (Wikiversity) and business (many enterprises use internal wikis as Intranet solutions[17]) to crazy humour (Uncyclopedia). Wikis are also used as part of software development environments such as Trac and some tools by Atlassian.

Web-based social networks

While most readers probably think of Facebook here, even this kind of networking has a lot of early predecessors. As seen above, some of these functions already existed in mailing lists and Usenet; on the web, we could recall services like Tripod and Geocities, which allowed users to create web pages with little effort and thus link with others, forming online communities. Numerous web services from the late 90s allowed users to create their own personal profiles. Of the modern social networks, the first was Friendster in 2002; it was followed by MySpace and LinkedIn in 2003, and Orkut and Facebook in 2004.

The main factor in web-based social networks is the multitude of possibilities. While Facebook can be used simply as a 'virtual card' of a person (he or she is 'just there' without much participation), it can also function as a blog, gallery, forum, advertisement engine, gaming platform and much more. These networks are increasingly being used in education, research, politics (the so-called 'Facebook revolutions' and various protests worldwide) and business. On the darker side, there are increasing concerns about surveillance and threats to privacy (these will be discussed in another lecture).

Web 2.0

While the term has its supporters as well as critics, most authors agree that it encompasses the following main components:

  • The network (web) as a platform - all work is done in the browser, which replaces the whole set of earlier applications (Google Docs is a good example). Thus the web largely acts as an operating system.
  • Dynamics, constant development - all the content is in motion. Examples range from Wikipedia and YouTube to blogs, forums and personal wikis.
  • Inclusion and community - content creation is a community activity that everyone can take part in.
  • Technology: XHTML, CSS, LAMP, AJAX, tagging, blogs, RSS, wikis.

Social Software and Free Culture

While free and open-source software already has a longer history, the New Media boom gave rise to new kinds of free content which were not software (research papers, e-learning materials, works of art etc). As the earlier concept of 'intellectual property' ran into growing problems (skyrocketing illegal copying was a factor, but not the only one), more creative people came to the conclusion that keeping one's work under total control may not be the optimal way to profit from it.

Note: the following is a very brief overview, the topic will have a separate lecture.

Creative Commons

In 2001, the US lawyer and writer Lawrence Lessig founded a new initiative to study new ways to regulate creative content on the Net. He sought to find a 'middle road' between the public domain and the earlier strict copyright - the result was the Creative Commons family of licenses, using the motto 'some rights reserved'.

Open Access

Research has always been built on earlier knowledge - every scientific achievement stands on the results of predecessors, and all works of research demand thorough study of earlier works. Therefore the availability of science has always been important - through public libraries and universities, and recently also through the Net. Yet by the end of the 20th century, scientific publishing had gradually turned into a lucrative business for a few, while many less-funded researchers were unable to access the newest results. The problem was largely in the skewed business model - publicly funded researchers submitted their results to publishers for free, the results were published in journals, which were then sold back to the scientists.

A solution was proposed in the form of Open Access which is gradually gaining traction in the scientific community.


The Internet is a diverse phenomenon, and useful knowledge can be obtained from it in a variety of ways. The inclusive nature of new media will probably lead to new community-based applications (e.g. community-based radio and TV channels). Yet the skills of obtaining and sorting out information (both active and passive) remain very important.


Some links

Additional reading

Study & Write

Pick one of the new media components described above and write a short blog analysis of how it has influenced its area (e.g. wikis on encyclopedias, or blogs on the written media).

Back to the course page

The content of this course is distributed under the Creative Commons Attribution-ShareAlike 3.0 Estonia license (English: CC Attribution-ShareAlike, or CC BY-SA) or any newer version of the license.