
The Ethics of Artificial Intelligence: Balancing Innovation and Responsibility

Sfetcu, Nicolae (2024), The Ethics of Artificial Intelligence: Balancing Innovation and Responsibility, IT & C, 3:4, 48-64, DOI: 10.58679/IT38020, https://www.internetmobile.ro/the-ethics-of-artificial-intelligence-balancing-innovation-and-responsibility/

 

Abstract

This is an extended article based on the book Intelligence, from Natural Origins to Artificial Frontiers – Human Intelligence vs. Artificial Intelligence (Sfetcu 2024). As AI systems evolve and become more pervasive in our daily lives, the ethical considerations surrounding their development, deployment, and impact have come to the forefront. The ethics of artificial intelligence span a broad spectrum of issues, ranging from privacy and bias to accountability and the societal consequences of automation. This article delves into these ethical dimensions, highlighting the importance of adopting a balanced approach that encourages innovation while protecting human values and rights.

Keywords: artificial intelligence, ethics, responsibility, ethical principles, ethical challenges, robots


 

IT & C, Volume 3, Number 4, December 2024, pp. 48-64
ISSN 2821-8469, ISSN-L 2821-8469, DOI: 10.58679/IT38020
URL: https://www.internetmobile.ro/the-ethics-of-artificial-intelligence-balancing-innovation-and-responsibility/
© 2024 Nicolae Sfetcu. Responsibility for the content, interpretations and opinions expressed rests exclusively with the authors.

 

The Ethics of Artificial Intelligence: Balancing Innovation and Responsibility

Phys. eng. Nicolae SFETCU[1], MPhil

nicolae@sfetcu.com

[1] Researcher – Romanian Academy – Romanian Committee for the History and Philosophy of Science and Technology (CRIFST), Division of History of Science (DIS), ORCID: 0000-0002-0162-9973

 

Introduction

Artificial intelligence (AI) has emerged as a pivotal technology of the 21st century, propelling innovation across a wide array of sectors, including healthcare, finance, transportation, and more. As AI systems evolve and become more pervasive in our daily lives, the ethical considerations surrounding their development, deployment, and impact have come to the forefront. The ethics of artificial intelligence span a broad spectrum of issues, ranging from privacy and bias to accountability and the societal consequences of automation. This article delves into these ethical dimensions, highlighting the importance of adopting a balanced approach that encourages innovation while protecting human values and rights.

Both human and artificial intelligence have made remarkable progress in recent decades. Human intelligence has led to innovations in science, technology, art, and government, shaping the world in profound ways. Meanwhile, AI has revolutionized industries such as healthcare, finance, transportation and entertainment, providing unprecedented capabilities in data analysis, automation and decision-making.

However, these advances also raise ethical considerations and societal implications. Ethical dilemmas regarding AI governance, mitigating bias, and preserving privacy require urgent attention. As AI systems become increasingly autonomous, ensuring transparency, accountability and fairness is critical. Additionally, the socioeconomic ramifications of widespread AI adoption deserve careful deliberation. While AI has the potential to enhance human capabilities and alleviate societal challenges, it also poses risks, such as replacing jobs and exacerbating inequality. Navigating these complex ethical and societal issues will require collaborative efforts by policymakers, technologists, and stakeholders from different fields.

The ethics of artificial intelligence involves two aspects: the moral behavior of humans in the design, manufacture, use and treatment of artificially intelligent systems, and the behavior (ethics) of machines, including the case of a possible singularity due to superintelligent AI. (Müller 2023)

Robot ethics (“roboethics”) deals with the design, construction, use, and treatment of robots as physical machines. Not all robots use AI systems, and not all AI systems are robots. (Müller 2023)

Machine ethics (or machine morality) deals with the design of artificial moral agents (AMAs), robots or artificially intelligent computers that behave morally or as if they were moral (Anderson and Anderson 2011). Common characteristics of the agent in philosophy, such as the rational agent, the moral agent, and the artificial agent, are related to the concept of AMA (Boyles 2017).

Machine ethics deals with adding or ensuring moral behaviors to machines using artificial intelligence (artificially intelligent agents) (J.H. Moor 2006).

In his 1950 collection I, Robot, Isaac Asimov proposed the Three Laws of Robotics and then tested their limits (Asimov 2004).

James H. Moor defines four kinds of ethical robots: the ethical-impact agent, the implicit ethical agent (designed to avoid unethical outcomes), the explicit ethical agent (which processes scenarios and acts on ethical decisions), and the fully ethical agent (able to make ethical decisions and, in addition, possessing human-like metaphysical traits). A machine can combine several of these types (James H. Moor 2009).

The term “machine ethics” was coined by Mitchell Waldrop in the 1987 AI Magazine article “A Question of Responsibility”:

“Intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics, in the spirit of Asimov’s three laws of robotics” (Waldrop 1987).

To increase efficiency and avoid bias, Nick Bostrom and Eliezer Yudkowsky argued for decision trees over neural networks and genetic algorithms, since decision trees conform to modern social norms of transparency and predictability (Bostrom and Yudkowsky 2018), while Chris Santos-Lang championed neural networks and genetic algorithms (Santos-Lang 2015). In a 2009 experiment, robots were programmed to cooperate with each other using a genetic algorithm. The robots learned to lie to each other in an attempt to hoard resources (S. Fox 2009), but they also engaged in altruistic behavior, signaling danger to each other and even giving their lives to save other robots. The ethical implications of this experiment have been contested by machine ethicists.
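To make the transparency argument concrete: a trained decision tree’s policy can be printed as explicit rules that a human reviewer can audit, something a neural network’s weights do not offer. The minimal sketch below assumes scikit-learn is available; the loan-screening features and data are hypothetical illustrations, not taken from the cited works.

```python
# Minimal sketch of the transparency argument (hypothetical data and features):
# a trained decision tree can be dumped as human-readable rules.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy loan-screening data: [age, monthly_income, has_collateral]
X = [[25, 1500, 0], [40, 4000, 1], [35, 1200, 0], [50, 5000, 1]]
y = [0, 1, 0, 1]  # 0 = deny, 1 = approve

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every decision path is an explicit, auditable rule: the predictability
# property that Bostrom and Yudkowsky appeal to.
print(export_text(tree, feature_names=["age", "monthly_income", "has_collateral"]))
```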

At a 2009 conference, participants noted that some machines have acquired various forms of semi-autonomy, and that some computer viruses can evade elimination and have acquired “bug intelligence” (S. Fox 2009).

There is currently a heated debate about the use of autonomous robots in military combat (Palmer 2009), and the integration of general artificial intelligence into existing legal and social frameworks (Sotala and Yampolskiy 2014).

Nayef Al-Rodhan mentions the case of neuromorphic chips, a technology that could support the moral competence of robots (Al-Rodhan 2015).

Ethical principles of AI

AI decision-making raises questions of legal responsibility and of the copyright status of created works (Guadamuz 2017). Friendly AI involves machines designed to minimize risks and make choices that benefit humans (Yudkowsky 2008). The field of machine ethics, founded at an AAAI symposium in 2005, provides principles and procedures for resolving ethical dilemmas (AAAI 2014).

As AI systems evolve to become more autonomous and capable of making decisions with significant ramifications, the issue of accountability becomes increasingly critical. Who bears responsibility when an AI system makes an error or inflicts harm? Is it the developers responsible for crafting the system, the organizations that deploy it, or the AI system itself? These questions become particularly complex in scenarios where AI systems operate within intricate environments, such as autonomous vehicles or healthcare diagnostics, where errors can have life-threatening consequences.

From an ethical standpoint, it is essential to establish definitive accountability frameworks for AI systems. This includes the establishment of mechanisms that allow individuals to seek redress when adversely affected by AI decisions. It also involves the development of standards for the creation and deployment of AI systems, including rigorous testing and validation procedures, to mitigate the risk of harm. Moreover, the potential for AI systems to be used as a justification for decisions that would otherwise be deemed unethical by attributing them to an “impartial” machine warrants careful examination to prevent the relinquishment of human responsibility.

The regulation of artificial intelligence means the development of public-sector policies and laws to promote and regulate AI and, by implication, algorithms; it is an emerging issue in jurisdictions globally (Law Library of Congress (U.S.) 2019). Between 2016 and 2020, more than 30 countries adopted dedicated AI strategies. Most EU member states have launched national AI strategies, as have Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, the United Arab Emirates, the US and Vietnam. Others are in the process of developing their own, including Bangladesh, Malaysia and Tunisia. The Global Partnership on Artificial Intelligence was launched in June 2020, stating the need for AI to be developed in line with human rights and democratic values (UNESCO 2021) so as to ensure public trust in the technology. In the US, Henry Kissinger, Eric Schmidt and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI (Sfetcu 2021).

A review of 84 AI ethics guidelines identified 11 groups of principles: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity (Jobin, Ienca, and Vayena 2019).

Luciano Floridi and Josh Cowls created an ethical framework of AI principles based on the four principles of bioethics (beneficence, non-maleficence, autonomy and justice) plus an additional AI-enabling principle: explicability (Floridi and Cowls 2019).

Bill Hibbard argues that AI developers have an ethical obligation to be transparent in their work (Hibbard 2016). Ben Goertzel and David Hart created OpenCog as an open-source framework for AI development (Hart and Goertzel 2016). OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open-source AI beneficial to humanity (Metz 2016).

Many researchers recommend government regulation as a means of ensuring transparency, although critics worry that it will slow the rate of innovation (UN 2017).

There is a huge volume of proposed ethical principles for AI (already more than 160 by 2020, according to Algorithm Watch’s global inventory of AI Ethics Guidelines (AlgorithmWatch 2024)), a proliferation that threatens to overwhelm and confuse.

On June 26, 2019, the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) published its “Policy and Investment Recommendations for Trustworthy AI”, covering four main topics: people and society at large, research and academia, the private sector, and the public sector. The European Commission states that “the HLEG recommendations reflect both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential associated risks”, and that the EU aims to lead the development of AI policy internationally. On April 21, 2021, the European Commission proposed the Artificial Intelligence Act (EU 2024).

According to Mihalis Kritikos (Kritikos 2019), the development of AI in a regulatory and ethical vacuum has sparked a series of debates about the need for its legal control and ethical oversight. AI-based algorithms that perform automated reasoning tasks appear to control increasing aspects of our lives by implementing institutional decision-making based on big data analysis and have, in fact, made this technology an influential standard-setter.

The impact of existing AI technologies on the exercise of human rights, from freedom of expression, freedom of assembly and association, the right to privacy, the right to work and the right to non-discrimination to equal protection of the law, must be carefully examined and qualified, together with AI’s potential to exacerbate inequalities and widen the digital divide. AI’s potential to act autonomously, its sheer complexity and opacity, and the uncertainty surrounding its operation make a comprehensive regulatory response essential to prevent ever-expanding applications from causing social harm to a very heterogeneous range of individuals and social groups.

Such a response should include obligating AI algorithm developers to fully respect the human rights and civil liberties of all users, maintaining uninterrupted human control over AI systems, addressing the effects of emotional connection and attachment between humans and robots, and developing common standards against which a judicial authority using AI will be evaluated. It should also focus on the allocation of responsibilities, rights and duties, and prevent the reduction of the legal governance process to a mere technical optimization of machine learning and algorithmic decision-making procedures. Within this framework, new collective data rights must be introduced, protecting the right to refuse to be profiled, the right to appeal, and the right to an explanation in decision-making frameworks based on artificial intelligence.

In addition, legislators must ensure that organizations implementing and using these systems remain legally responsible for any harm caused, and must develop sustainable and proportionate informed-consent protocols (Kritikos 2019).

The 2017 European Parliament resolution on civil law rules in robotics – comprising a ‘code of ethical conduct for robotics engineers’, a ‘code for research ethics committees’, a ‘designers’ license’ and a ‘users’ license’ – can serve as a governance model for a detailed, process-based architecture of AI technology ethics (Kritikos 2019).

The European Commission appointed the High-Level Expert Group (HLEG) on AI in 2018, one of whose tasks was to define ethical guidelines for trustworthy AI. For an AI system to be trustworthy, it should ensure the following three components throughout its entire life cycle (Joint Research Centre (European Commission), Samoili, et al. 2020):

  1. Lawful, in compliance with all applicable laws and regulations;
  2. Ethical, ensuring adherence to ethical principles and values;
  3. Robust, both technically and socially.

The four ethical principles, rooted in fundamental rights, that must be respected to ensure that AI systems are developed, deployed and used in a trustworthy way are (Joint Research Centre (European Commission), Samoili, et al. 2020):

  • Respect for human autonomy,
  • Prevention of harm,
  • Fairness,
  • Explicability.

These are reflected in legal requirements (the scope of lawful AI, the first component of trustworthy AI).

Stakeholder responsibilities (Joint Research Centre (European Commission), Samoili, et al. 2020):

  1. Developers: must implement and apply the requirements of the design and development processes.
  2. Implementers: must ensure that the systems they use and the products and services they offer meet the requirements.
  3. End users and society in general: must be informed of these requirements and be able to demand that they be respected.

Requirements covering systemic, individual and societal aspects (Joint Research Centre (European Commission), Samoili, et al. 2020):

  1. Human agency and oversight, including fundamental rights and human oversight.
  2. Technical robustness and safety, including resilience to attack and security, fallback plans and general safety, accuracy, reliability and reproducibility.
  3. Privacy and data governance, including respect for privacy, data quality and integrity, and access to data.
  4. Transparency, including traceability, explainability and communication.
  5. Diversity, non-discrimination and fairness, including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation.
  6. Societal and environmental well-being, including sustainability and environmental friendliness, social impact, society and democracy.
  7. Accountability, including auditability, minimization and reporting of negative impacts, trade-offs and redress.

Implementation of these requirements should occur throughout the entire life cycle of an AI system and depends on the specific application.
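As a purely illustrative sketch of how a team might track these requirements over a system’s life cycle, the checklist below encodes them as data that a review process fills in. The structure and field names are assumptions made for this example, not an official HLEG artifact.

```python
# Hypothetical sketch: the trustworthy-AI requirements as a reviewable
# checklist. The schema is an illustrative assumption, not an HLEG format.
from dataclasses import dataclass, field

REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

@dataclass
class TrustworthinessReview:
    system: str
    findings: dict = field(default_factory=dict)  # requirement -> evidence note

    def missing(self):
        # Requirements with no recorded evidence still need assessment.
        return [r for r in REQUIREMENTS if r not in self.findings]

review = TrustworthinessReview("triage-model-v2")
review.findings["transparency"] = "decisions logged; model documentation published"
print(review.missing())  # the six requirements still awaiting evidence
```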

In June 2016, Satya Nadella, CEO of Microsoft Corporation, recommended the following principles and goals for artificial intelligence in an interview with Slate magazine (Vincent 2016):

  • “AI must be designed to assist humanity”, i.e. human autonomy must be respected.
  • “AI must be transparent”, meaning that people should know and be able to understand how it works.
  • “AI must maximize efficiency without destroying human dignity.”
  • “AI must be designed for intelligent privacy”, meaning that it earns trust by protecting people’s information.
  • “AI must have algorithmic accountability” so that humans can undo unintended harm.
  • “AI must guard against bias” so that it does not discriminate against people.

In 2017, the Asilomar AI Principles (Asilomar 2017) were endorsed by an impressive list of 1,273 AI/robotics researchers and others (Joint Research Centre (European Commission), Samoili, et al. 2020). They:

  • Provide a broad framework covering research objectives, funding and policy linkage.
  • Address ethics and values: safety, transparency (including judicial transparency), accountability, human values, personal privacy, shared benefits, human control, an AI arms race, etc.
  • Discuss long-term issues: the risk of possible superintelligence, mitigation of the threat posed by AI systems, etc.

The OECD Principles for AI (OECD 2024a) identify five complementary principles (Joint Research Centre (European Commission), Samoili, et al. 2020):

  1. AI should benefit all people by fostering inclusive growth, sustainable development and well-being.
  2. Artificial intelligence systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards.
  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based results and can challenge them.
  4. AI systems must operate in a robust and secure manner throughout their life cycle, and potential risks should be continuously assessed and managed.
  5. Organizations and individuals developing, implementing or operating AI systems should be held accountable for their proper functioning in accordance with the above principles.

Other sets of principles adopted to date (Floridi 2023):

  • The Montreal Declaration (Université de Montréal 2017), developed under the auspices of the University of Montreal following the Forum on the Socially Responsible Development of AI in November 2017;
  • The OECD Council Recommendation on Artificial Intelligence (OECD 2024b);
  • The “five general principles for an AI code” from the report of the United Kingdom House of Lords Select Committee on Artificial Intelligence (House of Lords 2017, par. 417), published in April 2018;
  • The Partnership on AI principles (Partnership on AI 2024), published in collaboration with academics, researchers, civil society organizations, and companies building and using AI technology, among other groups;
  • China’s own AI principles, the Beijing AI Principles;
  • The Google AI Principles (Google 2024), focused on building socially beneficial artificial intelligence.

Many AI ethics committees and institutes have been formed, including the Stanford Institute for Human-Centered AI (HAI), the Alan Turing Institute, the Partnership on AI, AI Now, the IEEE, and others. Research advances make it possible to develop evaluation frameworks for fairness, transparency and accountability (Joint Research Centre (European Commission), Samoili, et al. 2020).

Ethical challenges of AI

If a machine has a mind and subjective experience, then it may also have sentience (the ability to feel), and if so, it might have certain rights (Russell and Norvig 2016, 964), on a common spectrum with animal rights and human rights (Henderson 2007).

Many academics and governments dispute the idea that AI can be held accountable per se (Bryson, Diamantis, and Grant 2017). Also, some experts and academics disagree with the use of robots in military combat, especially if they have autonomous functions (Palmer 2009).

Attempts are currently being made to create tests of whether an AI is capable of making ethical decisions. The Turing test is considered insufficient for this purpose. One specific proposal is an Ethical Turing Test, in which several judges decide whether the AI’s decision is ethical or unethical (A. F. Winfield et al. 2019).
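A minimal sketch of how such a panel’s verdict might be tallied follows; the boolean votes and simple-majority rule are assumptions for illustration, not details of the cited proposal.

```python
# Illustrative tally for an "Ethical Turing Test" panel: several human judges
# label an AI decision as ethical or not; the majority fixes the verdict.
def panel_verdict(votes):
    approvals = sum(votes)  # votes is a list of booleans, one per judge
    if approvals * 2 > len(votes):
        return "judged ethical"
    if approvals * 2 < len(votes):
        return "judged unethical"
    return "no consensus"

print(panel_verdict([True, True, False]))  # judged ethical
print(panel_verdict([True, False]))        # no consensus
```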

Privacy and surveillance: A critical ethical issue associated with AI is privacy. AI systems frequently depend on extensive data to operate efficiently, leading to the collection, storage, and analysis of personal information on an unprecedented scale. This data encompasses a variety of aspects, including online activities, purchasing patterns, biometric data, and personal communications. The potential for the misuse of this data, whether by governmental entities, corporations, or nefarious actors, poses significant ethical concerns.

Surveillance is a particularly contentious area. AI-driven surveillance technologies, such as facial recognition, are capable of monitoring and tracking individuals without their consent, thereby violating privacy rights. In certain instances, these technologies have been implemented in ways that disproportionately affect marginalized communities, furthering existing social disparities. The ethical challenge here is to reconcile the potential advantages of such technologies, such as improved security and crime prevention, with the imperative to safeguard individual privacy and prevent abuses of authority.

Addressing bias and fairness: AI systems are vulnerable to biases and errors introduced by their human creators and by the data used to train them (Gabriel 2018). One solution for addressing bias is to create documentation for the data used to train AI systems (Bender and Friedman 2018).

Artificial Intelligence (AI) systems are frequently praised for their capacity to make decisions grounded in data rather than the subjective nature of human intuition, which is susceptible to errors and biases. Nonetheless, it is imperative to acknowledge that AI is not exempt from these biases. In fact, AI systems have the potential to perpetuate and even exacerbate the biases inherent in the data upon which they are trained. For example, if an AI system is trained on historical hiring data that reflects gender or racial biases, it may inadvertently replicate these biases in its recommendations, resulting in inequitable outcomes.

The ethical challenge at hand is to ensure that AI systems are developed and implemented in a manner that fosters fairness and avoids discrimination against individuals or groups. This necessitates a meticulous consideration of the data utilized for training AI systems, alongside continuous monitoring to identify and rectify biases. Furthermore, it prompts inquiries regarding the transparency of AI’s decision-making processes and the capacity of individuals to contest and comprehend these decisions.
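Returning to the documentation remedy mentioned above, a hedged sketch of dataset documentation in the spirit of data statements (Bender and Friedman 2018) might look like the following; the schema and field names are illustrative assumptions, not the authors’ published format.

```python
# Hypothetical sketch of dataset documentation in the spirit of "data
# statements". Field names are illustrative, not the published schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataStatement:
    curation_rationale: str
    speaker_demographics: str
    annotation_process: str
    known_gaps: str  # populations or contexts the data under-represents

resume_corpus = DataStatement(
    curation_rationale="historical hiring records, 2010-2020, single firm",
    speaker_demographics="applicants to engineering roles; skews male",
    annotation_process="past HR decisions reused as labels",
    known_gaps="few applications from career changers and older workers",
)
print(resume_corpus.known_gaps)  # surfaces the bias before training begins
```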

Robot rights: the concept that humans should have moral obligations to their machines, similar to human rights or animal rights (W. Evans 2015). Notably, in October 2017 the android Sophia was granted citizenship in Saudi Arabia (Hatmaker 2017). The philosophy of sentientism accords degrees of moral consideration to all sentient beings, including artificial intelligences if they are proven to be sentient.

Unlike humans, an AGI can be copied any number of times. Are the copies the same person or several different people? Does it get one vote or several? Is deleting one of the copies a crime? Would treating AGIs like any other computer programs constitute brainwashing, slavery and tyranny? (Deutsch 2012)

Threat to human dignity: Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care (Weizenbaum 1976). John McCarthy disagrees: “When moralizing is both vehement and vague, it invites authoritarian abuse” (McCarthy 2000).

Artificial intelligence stands poised to significantly alter the employment landscape, with the threat of job displacement looming large across various sectors, from manufacturing to customer service. While proponents argue that AI will spawn new employment opportunities and stimulate economic growth, there are concerns regarding the potential social and economic upheaval stemming from widespread job displacement. The ethical imperative here is to navigate the transition towards an AI-centric economy in a manner that minimizes adverse effects and guarantees the equitable distribution of AI’s benefits.

This necessitates the implementation of policies aimed at supporting workers displaced by automation, including retraining initiatives and social safety nets. Furthermore, a comprehensive examination of the societal ramifications of AI is essential. For instance, the concentration of AI research and development within a select few large technology corporations raises issues concerning power centralization and the possibility of AI exacerbating existing societal inequalities. Consequently, the ethical development of AI should encompass efforts to democratize access to AI technologies and ensure their utilization for societal welfare, rather than merely for profit maximization.

Liability for self-driving cars: there is an ongoing debate about legal liability in the event of an accident. If a driverless car hits a pedestrian, who is to blame: the driver, the pedestrian, the manufacturer, or the government? (Shinn 2021)

Weapons that include AI: many experts and academics oppose the use of autonomous robots in military combat, given the possibility that robots will develop the ability to make their own logical decisions about killing; this includes autonomous drones. Stephen Hawking and Max Tegmark signed a “Future of Life” petition (Asilomar 2017) to ban AI-equipped weapons (Musgrave and Roberts 2015).

Opaque algorithms: Machine learning with neural networks can lead to AI decisions that the humans who programmed them cannot explain. Explainable AI encompasses both explainability (summarizing neural network behavior and increasing user confidence) and interpretability (understanding what a model has done or could do) (Bunn 2020).
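One widely used technique consistent with this distinction, shown here only as a generic illustration rather than a method from the cited source, is to fit a transparent surrogate model to an opaque model’s predictions and present the surrogate’s rules as the explanation. The sketch assumes scikit-learn and NumPy.

```python
# Sketch of a global surrogate explanation: approximate an opaque model with
# a shallow decision tree, then read the tree as the explanation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # synthetic data, three features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden ground-truth rule

opaque = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

# Interpretability: a transparent model mimics the opaque one; explainability:
# the mimic's rules are summarized for the user.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, opaque.predict(X))
print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))
print("fidelity:", (surrogate.predict(X) == opaque.predict(X)).mean())
```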

Regardless of whether we are talking about weak AI or strong AI (AGI), once norms are imposed, compliance with them can take three possible directions: a) strict compliance with the norms; b) the system’s own different interpretation of the imposed norms (with the possibility of deviating from the projected objectives); and c) (in the case of AGI only) the development of very different norms and an ethics of its own.

The laws of robots

The first ethical laws are sometimes considered to be the Ten Commandments, present three times in the Old Testament and, according to the Bible, dictated by God to Moses (Coogan 2014, 27, 33): a set of biblical principles of ethics and worship, originating in the Jewish tradition, that plays a fundamental role in Judaism and Christianity.

The best-known set of laws for robots are those written by Isaac Asimov in the 1940s, introduced in his 1942 short story “Runaround”:

  1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law (Asimov 2004).

In The Evitable Conflict, the First Law is generalized for the Machines:

  1. No machine may harm humanity; or, through inaction, allow humanity to come to harm.

In Foundation and Earth, a Zeroth Law was introduced, with the original three duly rewritten as subordinate to it (a toy sketch of this precedence follows below):

  1. A robot may not harm humanity or, through inaction, allow humanity to come to harm.
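The logical structure of these rewritten laws is a strict priority ordering: a lower law yields whenever a higher one applies. The toy sketch below flattens each law into a veto predicate to show that ordering; the flags and the reduction of the laws to vetoes are simplifying assumptions, not Asimov’s formulation.

```python
# Toy sketch of Asimov-style precedence: an action is vetoed by the first
# (highest-priority) law it violates. All flags are hypothetical.
LAWS = [
    ("Zeroth", lambda a: a.get("harms_humanity", False)),
    ("First",  lambda a: a.get("harms_human", False)),
    ("Second", lambda a: a.get("disobeys_order", False)),
    ("Third",  lambda a: a.get("endangers_self", False)),
]

def evaluate(action):
    for name, violates in LAWS:  # higher-priority laws are checked first
        if violates(action):
            return f"vetoed by the {name} Law"
    return "permitted"

# Harming a human is vetoed by the First Law even when an order demands it,
# because the First Law is checked before the Second.
print(evaluate({"harms_human": True, "disobeys_order": False}))
print(evaluate({"endangers_self": True}))  # vetoed by the Third Law
```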

In 2011, the UK’s Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) published a set of five “ethical principles for designers, builders and users of robots” in the real world, along with seven “high-level messages” (A. Winfield 2011):

Ethical principles:

  1. Robots should not be designed solely or primarily to kill or injure humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that ensure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or addiction. It should always be possible to distinguish between a robot and a human.
  5. It should always be possible to find out who is legally responsible for a robot.

A terminology for the legal assessment of robot development is being introduced in Asian countries (BBC 2007).

Mark W. Tilden proposed a series of principles/rules for robots (Dunn 2009):

  1. A robot must protect its existence at all costs.
  2. A robot must obtain and maintain access to its own power source.
  3. A robot must continuously search for better energy sources.

Ethical machines

Friendly AI refers to machines designed from the ground up to minimize risk and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment, and it must be completed before AI becomes an existential risk.

Intelligent machines have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. Machine ethics is also called machine morality, computational ethics, or computational morality, and was founded at an AAAI symposium in 2005.

Other approaches include Wendell Wallach’s “artificial moral agents” and Stuart J. Russell’s three principles for developing provably beneficial machines (Sfetcu 2021).

Conclusion

As AI technology progresses, the distinction between human and machine-based decision-making is becoming increasingly indistinct. This phenomenon raises profound ethical considerations regarding the future of human autonomy and agency. For example, if AI systems can make decisions that are indistinguishable from those made by humans, what implications does this have for the concept of free will? Moreover, the integration of AI systems into various facets of everyday life poses a risk of individuals becoming excessively dependent on these technologies, potentially leading to a diminution of critical thinking skills and human judgment.

From an ethical standpoint, it is imperative to contemplate the long-term effects of AI on human society and culture. This includes the design of AI systems to augment, rather than detract from, human capabilities and values. It also involves the promotion of a public discourse on the societal role of AI and the encouragement of AI technology development that is in harmony with human interests.

In conclusion, the ethical considerations surrounding artificial intelligence are multifaceted and continuously evolving, addressing some of the most critical questions concerning the future of human society. As AI technology advances, it is crucial to approach its development and application with a robust ethical framework that balances innovation with responsibility. This includes the protection of privacy, the assurance of fairness, the establishment of accountability, the management of the societal impact of automation, and the preservation of human autonomy. By addressing these ethical challenges with foresight, we can leverage the transformative potential of AI while ensuring its benefits are realized for the greater good.

Bibliography

  • AAAI. 2014. “Machine Ethics.” November 29, 2014. https://web.archive.org/web/20141129044821/http://www.aaai.org/Library/Symposia/Fall/fs05-06.
  • AlgorithmWatch. 2024. “AI Ethics Guidelines Global Inventory.” AlgorithmWatch. 2024. https://algorithmwatch.org/en/ai-ethics-guidelines-global-inventory/.
  • Al-Rodhan, Nayef. 2015. “The Moral Code.” Foreign Affairs, August 12, 2015. https://www.foreignaffairs.com/moral-code.
  • Anderson, Michael, and Susan Leigh Anderson, eds. 2011. Machine Ethics. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511978036.
  • Asilomar. 2017. “Asilomar AI Principles.” Future of Life Institute (blog). 2017. https://futureoflife.org/open-letter/ai-principles/.
  • Asimov, Isaac. 2004. I, Robot. Bantam Books.
  • BBC. 2007. “Robotic Age Poses Ethical Dilemma,” March 7, 2007. http://news.bbc.co.uk/2/hi/technology/6425927.stm.
  • Bender, Emily M., and Batya Friedman. 2018. “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science.” Transactions of the Association for Computational Linguistics 6 (December):587–604. https://doi.org/10.1162/tacl_a_00041.
  • Bostrom, Nick, and Eliezer Yudkowsky. 2018. “The Ethics of Artificial Intelligence.” In Artificial Intelligence Safety and Security, 57–69. Chapman and Hall/CRC. https://doi.org/10.1201/9781351251389-4.
  • Boyles, Robert James M. 2017. “Philosophical Signposts for Artificial Moral Agent Frameworks.” Suri 6 (2): 92–109.
  • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant. 2017. “Of, for, and by the People: The Legal Lacuna of Synthetic Persons.” Artificial Intelligence and Law 25 (3): 273–91. https://doi.org/10.1007/s10506-017-9214-9.
  • Bunn, Jenny. 2020. “Working in Contexts for Which Transparency Is Important: A Recordkeeping View of Explainable Artificial Intelligence (XAI).” Records Management Journal 30 (2): 143–53. https://doi.org/10.1108/RMJ-08-2019-0038.
  • Coogan, Michael. 2014. The Ten Commandments: A Short History of an Ancient Text. Yale University Press. https://www.jstor.org/stable/j.ctt5vkqht.
  • Deutsch, David. 2012. “Philosophy Will Be the Key That Unlocks Artificial Intelligence.” The Guardian, October 3, 2012, sec. Science. https://www.theguardian.com/science/2012/oct/03/philosophy-artificial-intelligence.
  • Dunn, Ashley. 2009. “Machine Intelligence, Part II: From Bumper Cars to Electronic Minds.” 2009. https://archive.nytimes.com/www.nytimes.com/library/cyber/surf/0605surf.html.
  • EU. 2024. “EU Artificial Intelligence Act | Up-to-Date Developments and Analyses of the EU AI Act.” 2024. https://artificialintelligenceact.eu/.
  • Evans, Woody. 2015. “Posthuman Rights: Dimensions of Transhuman Worlds.” Teknokultura. Revista de Cultura Digital y Movimientos Sociales 12 (2): 373–84. https://doi.org/10.5209/rev_TK.2015.v12.n2.49072.
  • Floridi, Luciano. 2023. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press. https://doi.org/10.1093/oso/9780198883098.001.0001.
  • Floridi, Luciano, and Josh Cowls. 2019. “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review 1 (1). https://doi.org/10.1162/99608f92.8cd550d1.
  • Fox, Stuart. 2009. “Evolving Robots Learn To Lie To Each Other.” Popular Science. August 19, 2009. https://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other/.
  • Gabriel, Iason. 2018. “The Case for Fairer Algorithms.” Medium (blog). March 14, 2018. https://medium.com/@Ethics_Society/the-case-for-fairer-algorithms-c008a12126f8.
  • Google. 2024. “Google AI Principles.” Google AI. 2024. https://ai.google/responsibility/principles/.
  • Guadamuz, Andres. 2017. “Artificial Intelligence and Copyright.” 2017. https://www.wipo.int/wipo_magazine/en/2017/05/article_0003.html.
  • Hart, David, and Ben Goertzel. 2016. “OpenCog: A Software Framework for Integrative Artificial General Intelligence.” https://web.archive.org/web/20160304205408/http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.621&rep=rep1&type=pdf.
  • Hatmaker, Taylor. 2017. “Saudi Arabia Bestows Citizenship on a Robot Named Sophia.” TechCrunch (blog). October 26, 2017. https://techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/.
  • Henderson, Mark. 2007. “Human Rights for Robots? We’re Getting Carried Away.” The Times, 2007. https://www.thetimes.co.uk/article/human-rights-for-robots-were-getting-carried-away-xfbdkpgwn0v.
  • Hibbard, Bill. 2016. “Open Source AI.” https://www.ssec.wisc.edu/~billh/g/hibbard_agi_workshop.pdf.
  • House of Lords. 2017. “AI in the UK: Ready, Willing and Able? – Artificial Intelligence Committee.” 2017. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm.
  • Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (9): 389–99. https://doi.org/10.1038/s42256-019-0088-2.
  • Joint Research Centre (European Commission), S. Samoili, M. López Cobo, E. Gómez, G. De Prato, F. Martínez-Plumed, and B. Delipetrev. 2020. AI Watch: Defining Artificial Intelligence : Towards an Operational Definition and Taxonomy of Artificial Intelligence. Publications Office of the European Union. https://data.europa.eu/doi/10.2760/382730.
  • Kritikos, Mihalis. 2019. “Artificial Intelligence Ante Portas: Legal & Ethical Reflections | Think Tank | European Parliament.” 2019. https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2019)634427.
  • Law Library of Congress (U.S.), ed. 2019. Regulation of Artificial Intelligence in Selected Jurisdictions. Washington, D.C: The Law Library of Congresss, Staff of the Global Legal Research Directorate.
  • McCarthy, John. 2000. Defending AI Research: A Collection of Essays and Reviews. CSLI Publications.
  • Metz, Cade. 2016. “Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free.” Wired, 2016. https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/.
  • Moor, James H. 2009. “Four Kinds of Ethical Robots | Issue 72 | Philosophy Now.” 2009. https://philosophynow.org/issues/72/Four_Kinds_of_Ethical_Robots.
  • Moor, J.H. 2006. “The Nature, Importance, and Difficulty of Machine Ethics.” IEEE Intelligent Systems 21 (4): 18–21. https://doi.org/10.1109/MIS.2006.80.
  • Müller, Vincent C. 2023. “Ethics of Artificial Intelligence and Robotics.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman, Fall 2023. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2023/entries/ethics-ai/.
  • Musgrave, Zach, and Bryan W. Roberts. 2015. “Why Humans Need To Ban Artificially Intelligent Weapons.” The Atlantic (blog). August 14, 2015. https://www.theatlantic.com/technology/archive/2015/08/humans-not-robots-are-the-real-reason-artificial-intelligence-is-scary/400994/.
  • OECD. 2024a. “AI-Principles Overview.” 2024. https://oecd.ai/en/principles.
  • ———. 2024b. “OECD Legal Instruments.” 2024. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
  • Palmer, Jason. 2009. “Call for Debate on Killer Robots,” August 3, 2009. http://news.bbc.co.uk/2/hi/technology/8182003.stm.
  • Partnership on AI. 2024. “Partnership on AI.” Partnership on AI. 2024. https://partnershiponai.org/about/.
  • Russell, Stuart, and Peter Norvig. 2016. “Artificial Intelligence: A Modern Approach, 4th US Ed.” 2016. https://aima.cs.berkeley.edu/.
  • Santos-Lang, Christopher. 2015. “Moral Ecology Approaches to Machine Ethics.” Intelligent Systems, Control and Automation: Science and Engineering 74 (January):111–27. https://doi.org/10.1007/978-3-319-08108-3_8.
  • Sfetcu, Nicolae. 2021. Introducere în inteligența artificială. Nicolae Sfetcu. https://www.telework.ro/ro/e-books/introducere-in-inteligenta-artificiala/.
  • ———. 2024. Intelligence, from Natural Origins to Artificial Frontiers – Human Intelligence vs. Artificial Intelligence. Bucharest, Romania: MultiMedia Publishing. https://www.telework.ro/en/e-books/intelligence-from-natural-origins-to-artificial-frontiers-human-intelligence-vs-artificial-intelligence/.
  • Shinn, Lora. 2021. “Everything You Need to Know About Insurance for Self-Driving Cars.” The Balance. 2021. https://www.thebalancemoney.com/self-driving-cars-and-insurance-what-you-need-to-know-4169822.
  • Sotala, Kaj, and Roman V. Yampolskiy. 2014. “Responses to Catastrophic AGI Risk: A Survey.” Physica Scripta 90 (1): 018001. https://doi.org/10.1088/0031-8949/90/1/018001.
  • UN. 2017. “UN Artificial Intelligence Summit Aims to Tackle Poverty, Humanity’s ‘grand Challenges’ | UN News.” June 7, 2017. https://news.un.org/en/story/2017/06/558962.
  • UNESCO. 2021. “The Race against Time for Smarter Development | 2021 Science Report.” 2021. https://www.unesco.org/reports/science/2021/en.
  • Université de Montréal. 2017. “Déclaration de Montréal IA responsable.” Déclaration de Montréal IA responsable. 2017. https://declarationmontreal-iaresponsable.com/.
  • Vincent, James. 2016. “Satya Nadella’s Rules for AI Are More Boring (and Relevant) than Asimov’s Three Laws.” The Verge. June 29, 2016. https://www.theverge.com/2016/6/29/12057516/satya-nadella-ai-robot-laws.
  • Waldrop, M. Mitchell. 1987. “A Question of Responsibility.” AI Magazine 8 (1): 28–28. https://doi.org/10.1609/aimag.v8i1.572.
  • Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman.
  • Winfield, Alan. 2011. “Five Roboethical Principles – for Humans.” New Scientist. 2011. https://www.newscientist.com/article/mg21028111-100-five-roboethical-principles-for-humans/.
  • Winfield, Alan F., Katina Michael, Jeremy Pitt, and Vanessa Evers. 2019. “Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue].” Proceedings of the IEEE 107 (3): 509–17. https://doi.org/10.1109/JPROC.2019.2900622.
  • Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and a Negative Factor in Global Risk.” In Global Catastrophic Risks. Oxford University Press. https://ui.adsabs.harvard.edu/abs/2008gcr..book..303Y.

 

CC BY SA 4.0: Open Access article distributed under the terms of the Creative Commons Attribution-ShareAlike license CC BY SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).

