E-SPEAIT T15 Ethics
Latest revision as of 14:13, 25 November 2024
IT and Ethics
Ain't good and bad the same as in times of old?
Ethics in its classic sense describes the rules and standards that regulate the behaviour of an individual towards others. On the one hand, most 'golden rules' do apply in the age of the Internet as well. On the other hand, the Internet has brought along a number of new ethical questions, some of which have made it necessary to reconsider old ideas (everything about 'intellectual property' is a good example). Let us imagine meeting Socrates or any other classical sage and asking him questions - he would easily answer those dealing with his known world. But what would his opinion be about things like spam, trolling or 'letters from a Nigerian prince'?
Or is it just another tempest in a teapot? Not quite: as has been said before, the Internet is, above all, people. It is a huge community where people physically thousands of miles apart are able to directly influence each other. And as seen before, without a certain critical mass of ethics, we will get a cyber-dump instead of cyberspace.
This 'Death by Dung' is most likely if disregard for ethics were to prevail. Just as there are sociopathic businesspeople and politicians in real life who would damage the environment in the name of personal profit, similar 'lower life forms' exist online as well.
Different points of view
Herman Tavani in his Ethics and Technology talks about cyberethics, as the terms computer ethics and even internet ethics would not sufficiently reflect the human(s) behind the technology. According to him, cyberethics can be approached from different points of view, for instance
- IT - ethical challenges stemming from the adoption of new technologies.
- Philosophy - putting the tech-related ethical questions to a larger, 'Big Picture' context.
- Social and behavioural sciences - measuring the impact of new technologies on social institutions and various groups in society.
- Information sciences - ethical problems related to legal topics (e.g. copyright etc), censorship and freedom of speech online.
Tavani also proposes three different approaches:
- Professional ethics - predominantly the view of computer, natural and information sciences, the issues include professionalism, responsibilities, risks, safety and reliability, codes of conduct etc.
- Philosophical ethics - the philosophical and legal view on issues like privacy, anonymity, copyright, freedom of speech etc.
- Descriptive ethics - the view of social sciences on e.g. the impact of technology on various institutions (government, education etc) and social groups (e.g. by sex/gender, age, ethnicity etc).
Some ethical theories
Michael J. Quinn in his Ethics for the Information Age has listed different ethical theories that might be used in an information society:
- Subjective Relativism (Moral Relativism) - while Relativism denies the existence of universal morality, Subjective Relativism proposes that each individual has his/her own Right and Wrong (the maxim "What’s right for you may not be right for me").
- Cultural Relativism - this theory sees the Right and Wrong in the context of specific cultures, capable of changing both in time (different eras) and space (different locations).
- Divine Command Theory - as the ethical cornerstone of three large 'book religions' (Judaism, Christianity, Islam), this theory bases the Right and Wrong on the divine will and commands conveyed in the Scriptures.
- Ethical Egoism - this theory is perhaps best seen in the novels by Ayn Rand. According to it, the long-term personal benefit should be the sole criterion for the Right; barter is seen as a foundational principle in human relationships - while Ethical Egoism does not rule out helping others, it is only considered reasonable in case of mutual benefit.
- Kantianism - the theory stands on the works of the German philosopher Immanuel Kant, who attempted to formulate a universal ethics with a common code of conduct. His main thesis, known as the Categorical Imperative, has two formulations:
- the principle of autonomy (First Formulation): Act only from moral rules that you can at the same time will to be universal moral laws.
- the principle of motives (Second Formulation): Act so that you always treat both yourself and other people as ends in themselves, and never only as a means to an end.
- Act Utilitarianism (also Direct Utilitarianism) - the theory of the English philosophers Jeremy Bentham and John Stuart Mill has utility as its central tenet (the greatest happiness principle: an action is right (or wrong) to the extent that it increases (or decreases) the total happiness of the affected parties). Note that according to this principle, it is possible to act right for wrong reasons, and vice versa.
- Rule Utilitarianism (also Indirect Utilitarianism) - this theory applies utility as a measuring stick to rules rather than directly to actions; an act is deemed right if the rule mandating it is right. According to the theory, the right rules are the ones that, when used as a moral code, bring more happiness (to all parties combined) than other rules. The approach is somewhat similar to Kant's, but while Kant stresses motives, this theory considers the actual results.
- The Social Contract Theory was first formulated by Thomas Hobbes in his book Leviathan and later added to by Locke and Jean-Jacques Rousseau. According to it, the society should strive to develop a set of rules that make sense to everyone (making people follow them voluntarily). For instance, driving on the right (or in some places, left) could be a common example - drivers keep to the right not for fearing the police but to avoid confusion and possible crashes.
- Rawls' Theory of Justice by John Rawls stems from two assumptions:
- Each person may claim a “fully adequate” number of basic rights and liberties, so long as these claims are consistent with everyone else having a claim to the same rights and liberties.
- Any social and economic inequalities must satisfy two conditions: first, they are associated with positions in society that everyone has a fair and equal opportunity to assume (e.g. by obtaining the necessary education); and second, they are "to be to the greatest benefit of the least-advantaged members of society" (the difference principle; an example could be progressive taxation).
- Virtue Ethics can be traced back to ancient Greece (perhaps most notably, Aristotle). According to it, a right action is an action that a virtuous person, acting in character, would do in the same circumstances. A virtuous person is a person who possesses and lives out the virtues. The virtues are those character traits human beings need in order to flourish and be truly happy. Aristotle also distinguishes between the intellectual and moral virtues, considering the latter more important (as they are innate, personal traits rather than learned behaviour).
In his Cypherpunk Ethics, Patrick D. Anderson adds some other approaches stemming from the Cypherpunk movement. Note: while the actual movement started in the early 1990s, its roots go back to the work of Whitfield Diffie and Martin Hellman on public-key cryptography in the 1970s.
The first ethical approach was the original crypto-anarchy of Timothy C. May, Eric Hughes and others, drawing also on the anarcho-capitalist ideas of David Friedman. This school of thought stresses the "Leave me alone" stance enabled by strong cryptography, with decentralization and freedom as central tenets. Crypto is used as a high-tech lock against government surveillance and power projection.
Another approach outlined by Anderson is the crypto-justice of Julian Assange. It is a kind of crypto-backed virtue ethics striving towards a "more just society" (with connections also to social contract and Rawls' theories), e.g. crypto-enabled WikiLeaks. Here, crypto is used more actively as a disruptor of power relations ("Privacy for the weak, transparency for the powerful").
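The public-key idea underlying the movement can be illustrated with a toy Diffie-Hellman exchange in Python. The tiny numbers here are chosen purely for readability and offer no real security; this is a conceptual sketch, not an implementation to use in practice:

```python
# Toy Diffie-Hellman key exchange - an illustration of the public-key idea,
# not a secure implementation (real systems use vetted libraries and
# 2048-bit or larger parameters).

p = 23  # publicly agreed prime modulus
g = 5   # publicly agreed generator

alice_private = 6   # kept secret by Alice
bob_private = 15    # kept secret by Bob

# Only these values travel over the open, observable channel:
alice_public = pow(g, alice_private, p)  # g^a mod p
bob_public = pow(g, bob_private, p)      # g^b mod p

# Each side combines its own secret with the other's public value;
# both arrive at the same shared key without ever transmitting it.
alice_key = pow(bob_public, alice_private, p)
bob_key = pow(alice_public, bob_private, p)

print(alice_key == bob_key)  # True
```

An eavesdropper who sees only p, g and the two public values would have to solve the discrete logarithm problem to recover the key - this is the 'high-tech lock' the cypherpunks built upon.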
Main fields of discussion
The previous theories have become the most contested in three large technology-related fields: rewarding creativity (e.g. copyright and other similar issues), privacy, and censorship. While the latter two have acquired new dimensions during the IT era, the first one faces perhaps the most radical changes of the three. Additionally, the digital gap (where some parts of the world are online and others are not, leaving the latter ones disadvantaged), information security, and social media (journalism is not limited to professional journalists any more) all pose new ethical challenges.
There are many ethical questions which are totally new. A good example is domain squatting - someone buys a lot of domains in the hope that someone will need some of them later, or snatches one away before an interested party can act (in both cases, potential profit is the main motive). The latter case is better regulated today, allowing the squatted domain to be 'returned' to the 'justified party' (e.g. someone registering drinkcoke.ee would likely have to hand it over to the Coca-Cola corporation soon enough). There are still risks of both identity theft and extortion. An interesting case was the apple.ee dispute in 2008 where, surprisingly enough, the large multinational did not take part at all (their local representative used mac.ee instead). Rather, it was about an Estonian designer whose family name was Õun ('apple' in Estonian) and someone attempting to wrest control of the domain for himself. Unfortunately, only Estonian-language references are to be found (e.g. https://arileht.delfi.ee/archive/koolipoiss-pressib-domeeni-valja?id=18628153).
Half empty, or half full?
An example of the ambiguity of ethical considerations is the list of online dangers formulated by Attila Krajci in 2000 (included in the book by Pinter; see the references below), where each one can also be given a positive point of view:
- Trust: "You never know who is on the other side" vs "you can have a carte blanche, ridding you of earlier loads".
- Authenticity: "What you find cannot be trusted" vs "you can look at the information itself rather than external authority".
- Sense of reality: "Things go unreal if you are online too much" vs "sometimes, the cyberspace is what someone needs in order to open up".
- Alienation: "net addicts get alienated from others" vs "sometimes a way to escape is necessary".
- Identity: "you can be whoever you want until you do not know anymore who you are" vs "you can be whoever you want and stay yourself".
- Aggression: "computer games make you aggressive" vs "games can teach very different things".
- Extremes: "Internet has porn, pedophiles and brainwashers" vs "sometimes one needs to see wrong to know right".
- Communication: "Internet does not allow using the whole spectrum of communication" vs "Internet adds new ways of communication, sometimes by seemingly truncating them".
- Noise: "you get lost in the mass of information" vs "there will be totally new ways to extract what you need".
Thus, it is not possible to say that one view is right and the other is wrong. A similar approach is used by Stephen Northcutt in his IT Ethics Handbook, which describes a large number of ethical dilemmas and provides two radically different answers from different viewpoints.
The book by Pinter also describes two approaches to IT - the technophile and the technophobic views. The former looks at the Internet as a kind of Cyber-Athens (in the classic, ancient sense) - the agora or meeting place is even more effective in cyberspace, promoting direct democracy and a free society. The latter view, on the contrary, suggests that the result will be an Orwellian surveillance society with Big Brother watching everywhere, and should technology become advanced enough, the machines may come to question the necessity of humans ("You're a plague and we are the cure").
A middle way between the two extremes has been sought for a long time. An interesting initiative was the Technorealism movement, which started with an eponymous manifesto in 1998. While its ideas had varying weight (see a critical comment at http://www.zpub.com/aaa/techreal.html) and the movement predated the social media era, a similar balancing force would still be needed in today's world.
Tavani's phases of cyberethics
The phases of cyberethics formulated by Herman Tavani concur in part with the generations of computing in IT history. To illustrate the difference from today, he also uses a legendary quote attributed to the then-CEO of IBM, Thomas J. Watson: "I think there is a world market for maybe five computers" (there are alternate versions with 4 or 6).
Phase I
1950s and 1960s - stand-alone (non-networked) mainframes. The first attempts at artificial intelligence brought along the first ethical questions in IT:
- Can machines think? If yes, should we build a thinking machine?
- If machines can be intelligent, then what does it mean to be human?
Privacy is mentioned early on as well, mostly in the context of Big Brother and large databases.
Phase II
1970s and 1980s - the rise of the business sector and the first networks (local and wide area networks). The main ethical questions include
- personal privacy (adding network and business aspects to the previous phase).
- rise of 'intellectual property' - the problems related to unauthorized copying.
- beginning of computer crime, at first in the form of pranks and intrusion (unlawful entry).
Phase III
Since around 1990 - the Web era. Additional issues include
- freedom of speech
- anonymity
- legislation
- trust
- public vs private information
Phase IV
Near future - converging technologies, ubiquitous computing, smart objects and things, chips, bioinformatics, probably nanocomputing.
Moral transparency of technology
Cyberethics is often descriptive (non-normative; it avoids judgement) rather than normative (judging an act or situation as right or wrong). However, the normative approach has its place, and in some cases the degree of normativeness depends on the technology in question:
- Transparent - everything is clear: the users understand both the technology (at least on a basic level) and the related moral choices (e.g. the phone network and the ethics of surveillance).
- Non-transparent with known features - the users understand the main principles but may not realize the related moral choices (e.g. Google).
- Non-transparent with unknown features - the users understand neither the principles (a black box) nor any moral factors (e.g. the Internet of Things).
Should ethics be codified?
Some consider it a bureaucratic waste of time, but the internal rules of a company, its security policy and various other documents are largely drawn up the same way. Besides having a legal status, such documents help find suitable people (someone disagreeing with the code of conduct from day one is likely unsuitable for other reasons as well), and even the drafting process itself can help propagate, introduce and discuss the matters.
An example of a code at a large company is provided by IBM.
Conclusion
The basic nature of ethics has not changed in the information era, but there are many new questions and some old ones have received new viewpoints. The importance of the field has grown, however - due to the ubiquity of IT, the ethical choices made there will significantly influence many other fields (especially those where the technologies start out as non-transparent). Therefore, the basic points of ethics should be codified in the future as well.
Study & Write
Pick an ethical theory described above. Find and describe a good illustrative example of its IT-related application.
For additional reading
- ANDERSON, Patrick D. Cypherpunk Ethics: Radical Ethics for the Digital Age. Routledge 2022.
- HIMANEN, Pekka. The Hacker Ethic and the Spirit of the Information Age. Random House Inc. New York, 2001.
- NORTHCUTT, Stephen. IT Ethics Handbook: Right and Wrong for IT Professionals. Syngress 2004.
- PINTER, Robert (ed). Information Society: Coursebook. Gondolat - Új Mandátum 2008.
- QUINN, Michael J. Ethics for the Information Age. International Edition. 6th ed. Pearson 2015.
- TAVANI, Herman T. Ethics & Technology: Ethical Issues in an Age of Information and Communication Technology. John Wiley & Sons, Danvers 2007.
The content of this course is distributed under the Creative Commons Attribution-ShareAlike 3.0 Estonia license (English: CC Attribution-ShareAlike, or CC BY-SA) or any newer version of the license.