Theory




History of the internet: What could have been


The internet, as you might expect, is an expansive and ever-changing technology; because of this, it is impossible to credit a single person with its invention. Many individuals worked on the internet that became the "information superhighway" it is today (Andrews, 2013). These individuals range from pioneering scientists to programmers and engineers, each adding new features and aspects that made the internet what it is (Andrews, 2013).

When looking at the history of the internet, Vannevar Bush is the name we come across most often. In 1945, Bush published an article in the Atlantic Monthly about memory extension through a photo-electro-mechanical device that he named the Memex. The Memex was capable of making and following links between documents on microfiche (a flat piece of film containing microphotographs of the pages of a newspaper, catalogue, or other document) (Connolly, 2000). In this blog I want to cover a name that does not appear in the history of the internet as much as it should: Nikola Tesla.

When we think about Nikola Tesla, we possibly think about Tesla, the electric vehicle and clean energy company that bears his name and is led by its CEO, Elon Musk. Beyond that, you might know Nikola Tesla as a renowned inventor credited with making the electric age possible through his work on modern alternating-current (AC) power transmission. What you possibly do not know is that Tesla played with the idea of a world wireless system in the 1900s. In a brochure, Tesla described his plans for the "world system of wireless" as "the instantaneous and precise wireless transmission of any kind of signals, messages or characters, to all parts of the world." He added that "... an inexpensive receiver, not bigger than a watch, will enable him [a user] to listen anywhere, on land or sea, to a speech delivered, or music played in some other place, however distant" (Bradford, 1999, p. 28). Of course, in our current era we have far surpassed this, but that is more than 100 years later. In 1901 he began building the transmitting station for his "world system of wireless" on Long Island, New York. Unfortunately, he ran out of money before he could complete it, and his possibly revolutionary ideas were never put to the test.

Nikola Tesla believed that energy could be transferred efficiently from a transmitter to a resonant receiver anywhere on the globe, and that earthly structures like mountains or buildings would absorb little of this energy because they would be non-resonant. He also considered utilizing electrical conduction through rarefied strata in the upper atmosphere, an idea reminiscent of Mahlon Loomis's earlier experiments (Bradford, 1999). He conjectured that a transmitter would produce an artificial aurora for this purpose. Unfortunately, he lacked access to equipment such as balloons to reach the upper atmosphere, and thus remained focused on conduction through the earth. Tesla was not the first to try signaling through the earth electrically, but unlike the others he did not plan to use complete circuits to pass current between two ground terminals (Bradford, 1999). "In Tesla's system, the alternating e.m.f. supplied by the transmitter was a carrier, and information was to be transmitted on it via amplitude modulation. His plans included modulations carrying telegraphic, telephonic, stock market, picture, time, and coded signals. How well his proposed electromechanical methods of modulation would have worked is debatable, but his ideas certainly were ahead of their time" (Bradford, 1999, p. 30). The experiments he did conduct covered only a radius of a few miles, and apparently Tesla did not attempt to observe how far the signal could be received. The greatest distance recorded was thirty miles, though the signal could likely have carried much farther. Even so, this was a significant accomplishment, but Tesla did not want to exploit it commercially and preferred to wait until he achieved his larger goal. This proved to be his greatest mistake: once he ran out of money and could not secure funding, the project came to a halt, the building was demolished, and the project was scrapped.
If Tesla had decided to commercialize his work, he could have completed the project and earned a prominent place in the history of wireless, which I consider part of the history of the internet (Bradford, 1999).

One significant question remains: would wireless have evolved along entirely different lines if Tesla had successfully completed his experiment?


References:
Andrews, E. (2013). Who Invented the Internet? Retrieved 12 November 2020, from https://www.history.com/news/who-invented-the-internet
Bradford, H. (1999). Tesla's Dream: The World System of Wireless - Part 1. Tesla Universe Article Collection. Retrieved 12 November 2020, from https://teslauniverse.com/nikola-tesla/articles/teslas-dream-world-system-wireless-part-1
Connolly, D. (2000). A Little History of the World Wide Web. Retrieved 12 November 2020, from https://www.w3.org/History.html



Semantic Markup: What Is It and Why Is It Necessary?


The way we structure our HTML (Hypertext Markup Language) file is an important factor to consider. Semantics refers to the correct interpretation of the meaning of a word or sentence, and semantic markup refers to a way of structuring a document that reinforces the meaning of the content over its appearance. One instance of good practice is separating the presentation of content from the content itself ("Semantic HTML for Meaningful Webpages", 2020): the structure and content live in your HTML, while the presentation of that content lives in your CSS. Semantic markup is built to facilitate accessibility, which should be at the heart of HTML design but is useless if not implemented correctly; we should use HTML elements the way they are supposed to be used. To begin writing semantic markup, we first must understand the hierarchy of our content and consider how both the user and the machine will read it. The purpose of this is to make it easier for browsers, developers, and web crawlers to distinguish between different types of data. Semantic tags such as "section", "article", "footer", "nav", and "aside" make it clear to browsers, developers, and web crawlers which information on the webpage is important ("Semantic HTML for Meaningful Webpages", 2020). Using "div" provides no such information to search engines and will hinder the accessibility of your webpage. Instead of being displayed on the first page of Google results, your page is likely to rank far lower, because web crawlers cannot make sense of the content; even if your webpage is the most relevant to what is being searched, it may never be accessed ("Semantic HTML for Meaningful Webpages", 2020).
The semantic tag "article", for instance, provides a clear section and a logical flow to the hierarchy; "div", by contrast, does not: its sections are ambiguous and carry no hierarchy ("Semantic HTML for Meaningful Webpages", 2020). The two overarching benefits of semantic markup are accessibility and maintainability. As web applications grow rich and creative, they tend to become less accessible to people with disabilities; semantic markup is used to ensure equal access for them, and it also benefits people without disabilities by allowing them to customize their experience ("Semantic HTML for Meaningful Webpages", 2020). Second, semantic markup results in clear, standardized code, which saves developers time in the long run: clear code is easy to maintain and allows a different developer to work on it without confusion.
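To make the contrast concrete, here is a minimal sketch of the same page fragment written both ways. The element names are standard HTML5; the page content and class names are invented for illustration:

```html
<!-- Non-semantic: every block is a "div", so nothing in the
     structure tells a browser, crawler, or screen reader
     what each part of the page is for -->
<div class="top">My Blog</div>
<div class="menu">...</div>
<div class="post">
  <div class="title">Why Semantics Matter</div>
  <p>Post text...</p>
</div>
<div class="bottom">Copyright 2020</div>

<!-- Semantic: the same content, but the header, navigation,
     article, and footer are now machine-readable roles -->
<header>My Blog</header>
<nav>...</nav>
<article>
  <h1>Why Semantics Matter</h1>
  <p>Post text...</p>
</article>
<footer>Copyright 2020</footer>
```

Both versions can be styled identically through CSS; only the semantic version, however, communicates the document's hierarchy to the machine.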


Semantic HTML for Meaningful Webpages. (2020). Retrieved 21 October 2020, from https://seekbrevity.com/semantic-markup-important-web-design



The Relevance of Vannevar Bush’s theory “As We May Think” to the design of the User Interface in computer technologies.


Humans understand things through their connections to other things. The word "chair" has no meaning without its connection to the object it names; this abstraction of information is produced through a link between the signifier, the term or symbol that holds the meaning (the word "chair"), and the signified, the meaning itself (the physical or conceptual chair). If we think about a "chair", our mind also trails off to things we associate with it, like the texture, the style, or the desk we relate to a "chair", and these associations have no limits.

If our mind drifted off to another concept, the trails associated with "chair" would fade. Vannevar Bush was concerned with this associative trail of thought and how it stores and links relevant pieces of information to each other, like the connotations of a word. His goal was to replicate this associative thought in physical space. In 1945, Bush published an article in the Atlantic Monthly about memory extension through a photo-electro-mechanical device that he named the Memex. The Memex was capable of making and following links between documents on microfiche (a flat piece of film containing microphotographs of the pages of a newspaper, catalogue, or other document) (Connolly, 2000). If we follow his thinking and apply it to the interface of a computer, we can see that human cognitive and associative processes are far superior to the interactions and presentation of data produced by the computer. In that case, the computer should adapt to accommodate human needs, not the other way around (Harper, 2010).

The reason I am linking Vannevar Bush's theory is that, more than 70 years later, it is more relevant than it ever was. Today we see his theory in practice when we search the web: search for "bleached hair", and not only will there be images related to "bleached hair", but there will be video tutorials on how to bleach your hair, studies on how bleach damages hair, and advice on how to make bleached hair healthy again (Harper, 2010). This follows a concept similar to what Bush theorized: the computer adapting to the needs of the human. Similarly, Simon Harper, who wrote an article on Bush's theory titled "'As We May Think' at 65", states that "the human mind operates by association and as the mind grasps onto one item it snaps to the next that is suggested by association of thought in accordance with some intricate web of trails" (Harper, 2010). Keeping that in mind, algorithms now developed for the web try to replicate this cognitive flow by categorizing information and linking items by their connections or relevance to each other (Harper, 2010). The future of the web may yet realize the theory Bush presented, with web trails of relevant information as effective as those of the human mind itself.


Harper, S. (2010). 'As We May Think' at 65. ACM SIGWEB Newsletter, (Spring), 1-3. doi: 10.1145/1721871.1721872



Hyperlinks


With the spike in popularity of the World Wide Web, there has been an increase of awareness of powerful, malleable hyperlink networks. Previous views on hyperlinks barely glimpsed the extent to which they can express meaning; theoretically, hyperlinks reflect deep social and cultural structures (Tsui & Turow, 2008). This newfound understanding has changed the way hyperlinks are used and exploited. Since hyperlinks are easily manipulated, and their use and understanding today differ from what they were ten years ago, they will leak into our social lives. This in turn means that hyperlinks need to be understood as far more than a way of linking an article to your webpage. When we think of hyperlinks, the first idea that comes to mind is a link, a reference word or image, that when clicked automatically takes us to a particular webpage holding information associated with the link.
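In HTML terms, the familiar click-and-navigate hyperlink just described is written with the anchor element. This is a minimal sketch; the URLs and link text are invented for illustration:

```html
<!-- Clicking the link text navigates to the associated page -->
<a href="https://www.example.com/related-article">associated information</a>

<!-- The same element can also point within the current page,
     jumping to the element whose id matches the fragment -->
<a href="#references">Jump to the references</a>
```

Notably, the markup itself is unidirectional: the target page carries no record of being linked to, which is exactly the limitation of web links discussed below in relation to Nelson's richer, associative vision.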

We cannot identify the origin of the hyperlink, because hyperlinks are simple enough to be applicable to a whole spectrum of technologies (Tsui & Turow, 2008). Hyperlinks are not limited to a particular medium; they are simply automatic citation devices, but there are precursors to what we generally think of as a hyperlink. Documents are not always in a digital format today, and they certainly were not in the past. Prior to hyperlinks, quotations and citations can be argued to have served a function similar to what hyperlinks were first thought to be; of course, this has changed, as previously mentioned. Despite this change, there will always be a relationship between the traditional citation and the development of the hyperlink (Tsui & Turow, 2008). The function of a citation appears obvious on the surface, but it is not as simple as it may seem. We generally think of a citation as presenting other ideas, often as support for our own argument; on top of this, it is also a way of "assuming access to a generalized pool of authoritative text and avoid recapitulating the development of an entire field" (Tsui & Turow, 2008, p. 3). In this role it points a reader to helpful resources that coincide with the topic at hand (Tsui & Turow, 2008). Other reasons for linking to previous work are to criticize, analyze, build on, or refute that work; without effective citations, scholarly work would remain solitary rather than the textual conversation it is. Because this conversation occurs across both context and time, citation can be acknowledged as a kind of "hyperconversation".

The terms "hyperlink" and "hypertext" were coined by Nelson, who set out to create a broadly associative way of organizing knowledge in his project Xanadu (Tsui & Turow, 2008). The problem was that hyperlinks were used in a unidirectional way and were unable to reflect rich associative thought. With the introduction of the World Wide Web, their significance has expanded and changed: the role of the link goes beyond citation and is now centred on navigation. "Clicking a hyperlink may lead to a camera changing its orientation, to a book being ordered and sent through the mail, to an e-mail in-box being reorganized, or to a closer view of a satellite image. These potential uses were not outside of the expectations of some of the originators of the hyperlink" (Tsui & Turow, 2008, p. 6). As a result, hyperlinks provide an opportunity to understand social behaviour and play an important role in the development of social knowledge. Hyperlinks are best understood within the context of a hyperlink network, because a hyperlink alone cannot cause a shift; it is the network behind the hyperlink that allows the shift. It is like clicking on a YouTube video with no internet connection: the video will not play. A hyperlink at its core is essentially text; the only difference is that it can be interpreted as a kind of control language, providing organization, coordination, and structure. "Hyperlinks form the basis for this learning, adaptive, self-aware social system" (Tsui & Turow, 2008, p. 15). If we look at the time lapse of the hyperlink, we see the ever-increasing role it plays in our interactions (Tsui & Turow, 2008).

When looking at hyperlinks, we see a similarity between the human thought process and the structure of hyperlinks. When we draw upon information in our mind, associated information is drawn upon as well; it is like having hyperlinks within our mind. For instance, if we see a moving car, our mind trails through information surrounding the car. It may be the velocity or the colour, and it will vary from person to person, but the principle is the same: we are linking aspects of the car together, just as you would reference or link associated information to a web article. It is like clicking on a link that says "red car" and an image of that moving car appearing, or vice versa. This associative thought process is what Vannevar Bush tried to replicate in 1945 with the Memex (Bush, 1945). Bush's Memex and the hyperlink are one and the same in principle, with today's usability expanding on what Bush presented when he tried to physicalize the associative thought process (Bush, 1945).


Bush, V. (1945). As We May Think. Retrieved 12 November 2020, from https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/
Tsui, L., & Turow, J. (2008). The Hyperlinked Society. Retrieved 21 October 2020, from https://www.goodreads.com/book/show/3148408-the-hyperlinked-society



UX & UI


The design of a user interface requires metaphors: essential terms and concepts for representing data and functions. Designing advanced user interfaces requires the designer to build on already established conventions, with an awareness of semiotic principles, to achieve more effective communication across diverse user communities. This matters because aspects such as culture, graphic design, information design, semantics, and semiotics all play a role in how an interface is perceived and read across different communities. For an ever more diverse range of users, the interface needs to clearly portray the content being displayed on a computer, for instance. The design metaphor should embody a basis for the product's usability and commercial success (Marcus, 1998). It is therefore important that the interface's design metaphor helps novice users become quickly proficient, and eventually expert users, without the training aids initially needed by novices (Marcus, 1998). This should be accomplished through the systematic design of visual information, allowing sophisticated computer-based products that cater to both domestic and international business markets. No matter how sophisticated the technology, or lack thereof, there are five aspects of user interface design that must be optimized to meet the needs and preferences of the user: mental models, navigation, presentation, interaction, and metaphors (Marcus, 1998).

“Metaphors are the fundamental concepts, terms, and images by which information is easily recognized, understood, and remembered” (Marcus, 1998, p. 2). Metaphors achieve their effectiveness through associations of organization or operation (Marcus, 1998). Even on a computer interface there are metaphors that relate to physical objects we have already built conventions around, like folders and documents on a computer: the icons resemble physical folders, and this is just one obvious example. We come across this in our daily lives too; consider a push-in trashcan. The first time I used one, I knew exactly how to operate it without observing anyone else or really thinking about it (Marcus, 1998). Just by looking at the bin, I knew what I needed to do, because it communicated its operation to me.

This takes me to user experience. The user experience is directly affected by the user interface: a clearly communicative interface produces a satisfactory experience. Metaphors that support a good user experience make an object's use obvious through its design; if this is not possible, make it possible through discoverability (Marcus, 1998). Discoverability means that when you look at something and try to use it, like a touchpad on a laptop, it should convey the operations that can be performed (Mars, 2016). This theory is relevant to everything we can interact with: when looking at something, it is not always possible to know what operations can be done or how to even use it. Going back to the touchpad on a laptop: to use it, should you tap it once, twice, or operate it with two fingers? This brings me to my last point, feedback. Feedback communicates to the user that a change has occurred; the change can be physical, visual, or auditory. From this feedback the user can associate the operation with the outcome (Mars, 2016).


References
Marcus, A., 1998. Metaphor Design for User Interfaces. [online] Available at: [Accessed 13 November 2020].
Mars, R., 2016. It's Not You. Bad Doors Are Everywhere. [online] YouTube. Available at: [Accessed 13 November 2020].



Digital Inequalities


Digital inequalities have long piqued the interest of sociologists. Since the early adoption of the internet, many have investigated issues relating to internet access, skills, uses, and outcomes. In his study "Digital inequalities in the age of artificial intelligence and big data", Christoph Lutz found that digital inequalities tend to mirror already existing social inequalities (Lutz, 2019). Social inequalities of socio-economic status, education, gender, age, geographic location, employment status, and race all recur in digital inequalities. Disadvantaged citizens are also disadvantaged on the internet: they have limited access to technology, which leaves them lacking important digital skills. In turn, disadvantaged citizens do not reap the benefits of information technology to the same extent that privileged citizens do (Lutz, 2019). As a result, the belief that the "Internet would create widespread social mobility" has been challenged by digital inequalities scholars. Lutz's study breaks digital inequality down into three levels: the first, second, and third digital divides. The distinction between these divides emerged over two decades of research into digital inequalities (Lutz, 2019). The first-level divide is unequal internet access; the second-level divide concerns digital skills and technology uses; the third-level divide concerns outcomes, whether beneficial or detrimental. The discourse on digital inequalities in emerging technology is dominated by disciplines like computer science, law, philosophy, and human-computer interaction, leaving digital inequality researchers underrepresented (Lutz, 2019).


The first level of the digital divide considers the gap between those who do and those who do not have access to new forms of technology. Although this unequal access to the internet is narrowing as time passes, there is still a large digital divide that mirrors global inequalities. While mobile internet access is widely available to most communities around the world, it is regarded as second-class access compared with its traditional computer counterpart (Lutz, 2019). This is simply because mobile internet offers lower functionality in terms of speed, memory, and storage capacity, and limitations of screen size and keyboard usability make tasks such as editing documents on mobile inefficient. Access to the internet via mobile is considered extractive rather than immersive, because it is centred on more superficial uses such as browsing, entertainment, and socializing (Lutz, 2019). Another issue is the lack of mobile-friendly versions of traditional websites: sites not designed to be navigated on mobile devices are cumbersome at best and impossible at worst.


The second level of the digital divide concerns "inequalities in skills and uses" (Lutz, 2019). An in-depth survey of 1,498 British internet users identified 10 types of internet use that differed by age, gender, and education (Lutz, 2019). Young, educated males commonly showed the highest use across most of the 10 types. Across most studies, age proves to be the most influential factor in online participation and social media use, with some platforms clearly gendered (Lutz, 2019). With the rapid development of the sharing economy, platform economy, or gig economy, digital forms of work have become more common, and with these new forms of digital work come new challenges, such as control, surveillance, and collective action. New studies therefore need to cover the emerging hierarchies, especially new skills such as algorithmic literacy (Lutz, 2019).

The third level of the digital divide concerns the "outcomes of internet use" (Lutz, 2019). It refers to differences in the gains from internet use "where access and use patterns are roughly similar" (Lutz, 2019), and it is relevant because it relates to the gap in individuals' capacity to translate their "internet access and use into favourable offline outcomes" (Lutz, 2019). "Individuals profit disproportionally from Internet use and can leverage these benefits to strengthen their social position, thus exacerbating existing social inequalities. The dynamic is described by a Matthew effect, where the rich get richer and use technology to strengthen their position in society" (Lutz, 2019, p. 144). An important point to note is that the third digital divide covers not only the benefits of internet use but also its harms. Disadvantaged populations are shown to suffer the most from large-scale surveillance based on digital traces, because they are more likely to fall victim to fraudulent offers or predatory websites. It is clear that research attention is shifting from the first digital divide to the third.



References
Lutz, C., 2019. Digital inequalities in the age of artificial intelligence and big data. Human Behavior and Emerging Technologies, 1(2), pp.141-148.