The Ethics of Artificial Intelligence: Balancing Innovation and Responsibility

Sfetcu, Nicolae (2024), The Ethics of Artificial Intelligence: Balancing Innovation and Responsibility, IT & C, 3:4, DOI: 10.58679/IT38020, https://www.internetmobile.ro/the-ethics-of-artificial-intelligence-balancing-innovation-and-responsibility/

 

Abstract

This is an extended article based on the book Intelligence, from Natural Origins to Artificial Frontiers – Human Intelligence vs. Artificial Intelligence (Sfetcu 2024). As AI systems evolve and become more pervasive in our daily lives, the ethical considerations surrounding their development, deployment, and impact have come to the forefront. The ethics of artificial intelligence span a broad spectrum of issues, ranging from privacy and bias to accountability and the societal consequences of automation. This article delves into these ethical dimensions, highlighting the importance of adopting a balanced approach that encourages innovation while protecting human values and rights.

Keywords: artificial intelligence, ethics, responsibility, ethical principles, ethical challenges, robots

Etica inteligenței artificiale: echilibrarea inovației și a responsabilității

Rezumat

Acesta este un articol extins având ca sursă cartea Intelligence, from Natural Origins to Artificial Frontiers – Human Intelligence vs. Artificial Intelligence (Sfetcu 2024). Pe măsură ce sistemele de inteligență artificială evoluează și devin mai răspândite în viața noastră de zi cu zi, considerentele etice legate de dezvoltarea, implementarea și impactul lor au ajuns în prim-plan. Etica inteligenței artificiale acoperă un spectru larg de probleme, de la confidențialitate și părtinire până la responsabilitate și consecințele societale ale automatizării. Acest articol analizează aceste dimensiuni etice, subliniind importanța adoptării unei abordări echilibrate care încurajează inovația, protejând în același timp valorile și drepturile omului.

Cuvinte cheie: inteligența artificială, etică, responsabilitate, principii etice, provocări etice, roboți

 

IT & C, Volumul 3, Numărul 4, Decembrie 2024, pp.
ISSN 2821 – 8469, ISSN – L 2821 – 8469, DOI: 10.58679/IT38020
URL: https://www.internetmobile.ro/the-ethics-of-artificial-intelligence-balancing-innovation-and-responsibility/
© 2024 Nicolae Sfetcu. Responsabilitatea conținutului, interpretărilor și opiniilor exprimate revine exclusiv autorilor.

 

The Ethics of Artificial Intelligence: Balancing Innovation and Responsibility

Ing. fiz. Nicolae SFETCU[1], MPhil

nicolae@sfetcu.com

[1] Cercetător – Academia Română – Comitetul Român de Istoria și Filosofia Științei și Tehnicii (CRIFST), Divizia de Istoria Științei (DIS), ORCID: 0000-0002-0162-9973

 

Introduction

Artificial intelligence (AI) has emerged as a pivotal technology of the 21st century, propelling innovation across a wide array of sectors, including healthcare, finance, transportation, and more. As AI systems evolve and become more pervasive in our daily lives, the ethical considerations surrounding their development, deployment, and impact have come to the forefront. The ethics of artificial intelligence span a broad spectrum of issues, ranging from privacy and bias to accountability and the societal consequences of automation. This article delves into these ethical dimensions, highlighting the importance of adopting a balanced approach that encourages innovation while protecting human values and rights.

Both human and artificial intelligence have made remarkable progress in recent decades. Human intelligence has led to innovations in science, technology, art, and government, shaping the world in profound ways. Meanwhile, AI has revolutionized industries such as healthcare, finance, transportation and entertainment, providing unprecedented capabilities in data analysis, automation and decision-making.

However, these advances also raise ethical considerations and societal implications. Ethical dilemmas regarding AI governance, mitigating bias, and preserving privacy require urgent attention. As AI systems become increasingly autonomous, ensuring transparency, accountability and fairness is critical. Additionally, the socioeconomic ramifications of widespread AI adoption deserve careful deliberation. While AI has the potential to enhance human capabilities and alleviate societal challenges, it also poses risks, such as job displacement and the exacerbation of inequality. Navigating these complex ethical and societal issues will require collaborative efforts by policymakers, technologists, and stakeholders from different fields.

The ethics of artificial intelligence involves two aspects: the moral behavior of humans in the design, manufacture, use and treatment of artificially intelligent systems, and the behavior (ethics) of machines, including the case of a possible singularity due to superintelligent AI (Müller 2023).

Robot ethics ("roboethics") deals with the design, construction, use, and treatment of robots as physical machines. Not all robots use AI systems, and not all AI systems are robots (Müller 2023).

Machine ethics (or machine morality) deals with the design of artificial moral agents (AMAs), robots or artificially intelligent computers that behave morally or as if they were moral (Anderson and Anderson 2011). Common characteristics of the agent in philosophy, such as the rational agent, the moral agent, and the artificial agent, are related to the concept of AMA (Boyles 2017).

Machine ethics deals with adding or ensuring moral behaviors to machines using artificial intelligence (artificially intelligent agents) (J.H. Moor 2006).

In his 1950 collection I, Robot, Isaac Asimov presented the three laws of robotics and then tested the limits of these laws (Asimov 2004).

James H. Moor defines four types of ethical robots: the ethical impact agent, the implicit ethical agent (designed to avoid unethical outcomes), the explicit ethical agent (which processes scenarios and acts on ethical decisions), and the fully ethical agent (able to make ethical decisions, in addition to possessing human metaphysical traits). A machine can combine several of these types (James H. Moor 2009).

The term "machine ethics" was coined by Mitchell Waldrop in his 1987 AI Magazine article "A Question of Responsibility":

"Intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics, in the spirit of Asimov's three laws of robotics" (Waldrop 1987).

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees over neural networks and genetic algorithms on the grounds that decision trees conform to modern social norms of transparency and predictability (Bostrom and Yudkowsky 2018), while Chris Santos-Lang has championed neural networks and genetic algorithms (Santos-Lang 2015). In a 2009 experiment, AI robots were programmed to cooperate with each other using a genetic algorithm. The robots then learned to lie to each other in an attempt to hoard resources (S. Fox 2009), but they also engaged in altruistic behavior, signaling danger to each other and even giving their lives to save other robots. The ethical implications of this experiment have been contested by machine ethicists.
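To make the transparency argument concrete, the following minimal Python sketch (using scikit-learn and an invented toy loan-screening dataset) fits a shallow decision tree and prints its decision rules; the data, feature names and thresholds are illustrative assumptions, not drawn from the cited studies.

```python
# Minimal sketch of the transparency argument: a shallow decision tree
# trained on an invented loan-screening dataset can be printed as
# human-readable rules, the property valued by Bostrom and Yudkowsky.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [income in thousands, years of credit history]
X = [[20, 1], [35, 3], [50, 5], [80, 10], [25, 2], [90, 12], [40, 4], [60, 8]]
y = [0, 0, 1, 1, 0, 1, 0, 1]  # 1 = approve, 0 = reject (made-up labels)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The entire decision procedure is inspectable and predictable in advance.
print(export_text(tree, feature_names=["income_k", "credit_years"]))
```

The printed output is a short set of if/then thresholds that an auditor can read and check in advance; a trained neural network or an evolved controller offers no comparably direct summary of its behavior.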

At a 2009 conference, participants noted that some machines have acquired various forms of semi-autonomy, and that some computer viruses can evade removal and have achieved "bug intelligence" (S. Fox 2009).

There is currently a heated debate about the use of autonomous robots in military combat (Palmer 2009), and the integration of general artificial intelligence into existing legal and social frameworks (Sotala and Yampolskiy 2014).

Nayef Al-Rodhan mentions the case of neuromorphic chips, a technology that could support the moral competence of robots (Al-Rodhan 2015).

Ethical principles of AI

AI decision-making raises questions of legal responsibility and of the copyright status of created works (Guadamuz 2017). Friendly AI involves machines designed to minimize risks and make choices that benefit humans (Yudkowsky 2008). The field of machine ethics, founded at an AAAI symposium in 2005, provides principles and procedures for resolving ethical dilemmas (AAAI 2014).

As AI systems evolve to become more autonomous and capable of making decisions with significant ramifications, the issue of accountability becomes increasingly critical. Who bears responsibility when an AI system makes an error or inflicts harm? Is it the developers responsible for crafting the system, the organizations that deploy it, or the AI system itself? These questions become particularly complex in scenarios where AI systems operate within intricate environments, such as autonomous vehicles or healthcare diagnostics, where errors can have life-threatening consequences.

From an ethical standpoint, it is essential to establish definitive accountability frameworks for AI systems. This includes the establishment of mechanisms that allow individuals to seek redress when adversely affected by AI decisions. It also involves the development of standards for the creation and deployment of AI systems, including rigorous testing and validation procedures, to mitigate the risk of harm. Moreover, the potential for AI systems to be used as a justification for decisions that would otherwise be deemed unethical by attributing them to an "impartial" machine warrants careful examination to prevent the relinquishment of human responsibility.

The regulation of artificial intelligence, that is, the development of public sector policies and laws to promote and govern AI and, by implication, algorithms, is an emerging issue in jurisdictions globally (Law Library of Congress (U.S.) 2019). Between 2016 and 2020, more than 30 countries adopted dedicated AI strategies. Most EU member states have launched national AI strategies, as have Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, the United Arab Emirates, the US and Vietnam. Others are in the process of developing their own AI strategy, including Bangladesh, Malaysia and Tunisia. The Global Partnership on Artificial Intelligence was launched in June 2020, stating the need for AI to be developed in line with human rights and democratic values (UNESCO 2021), to ensure public trust in the technology. In the US, Henry Kissinger, Eric Schmidt and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI (Sfetcu 2021).

A review of 84 AI ethics guidelines identified 11 groups of principles: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity (Jobin, Ienca, and Vayena 2019).

Luciano Floridi and Josh Cowls created an ethical framework of AI principles based on four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI-enabling principle – explainability (Floridi and Cowls 2019).

Bill Hibbard argues that AI developers have an ethical obligation to be transparent in their work (Hibbard 2016). Ben Goertzel and David Hart created OpenCog as an open-source framework for AI development (Hart and Goertzel 2016). OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open-source AI beneficial to humanity (Metz 2016).

Many researchers recommend government regulation as a means of ensuring transparency, although critics worry that it will slow the rate of innovation (UN 2017).

There is a huge volume of proposed ethical principles for AI, already more than 160 in 2020 according to AlgorithmWatch's global inventory of AI Ethics Guidelines (AlgorithmWatch 2024), a proliferation that threatens to overwhelm and confuse.

On June 26, 2019, the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and Investment Recommendations for Trustworthy AI", covering four main topics: people and society at large, research and academia, the private sector, and the public sector. The European Commission states that "the HLEG recommendations reflect both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential associated risks", and that the EU aims to lead the development of policies governing AI internationally. On April 21, 2021, the European Commission proposed the Artificial Intelligence Act (EU 2024).

According to Mihalis Kritikos (Kritikos 2019), the development of AI in a regulatory and ethical vacuum has sparked a series of debates about the need for its legal control and ethical oversight. AI-based algorithms that perform automated reasoning tasks appear to control increasing aspects of our lives by implementing institutional decision-making based on big data analysis and have, in fact, made this technology an influential standard-setter.

The impact of existing AI technologies on the exercise of human rights, from freedom of expression, freedom of assembly and association, the right to privacy, the right to work and the right to non-discrimination to equal protection under the law, must be carefully examined and qualified, together with the potential of AI to exacerbate inequalities and widen the digital divide. AI's potential to act autonomously, its sheer complexity and opacity, and the uncertainty surrounding its operation make a comprehensive regulatory response essential to prevent ever-expanding applications from causing social harm across a very heterogeneous range of individuals and social groups.

Such a response should include obligating AI algorithm developers to fully respect the human rights and civil liberties of all users, maintaining uninterrupted human control over AI systems, addressing the effects of emotional connection and attachment between humans and robots, and developing common standards against which the use of AI by a judicial authority will be evaluated. It should also focus on the allocation of responsibilities, rights and duties and prevent the reduction of the legal governance process to a mere technical optimization of machine learning and algorithmic decision-making procedures. In this framework, new collective data rights must be introduced to protect the ability to refuse to be profiled, the right to appeal, and the right to an explanation in decision-making frameworks based on artificial intelligence.

In addition, legislators must ensure that organizations implementing and using these systems remain legally responsible for any harm caused, and must develop sustainable and proportionate informed-consent protocols (Kritikos 2019).

The 2017 European Parliament Resolution on civil law rules in robotics, comprising a "code of ethical conduct for robotics engineers", a "code for research ethics committees", a "designers' licence" and a "users' licence", can serve as a governance model for a detailed, process-based architecture of AI technology ethics (Kritikos 2019).

The European Commission appointed the High-Level Expert Group (HLEG) on AI in 2018; one of its tasks was to define ethical guidelines for trustworthy AI. For an AI system to be trustworthy, it should satisfy the following three components throughout the entire life cycle of the system (Joint Research Centre (European Commission), Samoili, et al. 2020):

  1. Lawful, complying with all applicable laws and regulations,
  2. Ethical, ensuring adherence to ethical principles and values,
  3. Robust, both from a technical and a social perspective.

The four ethical principles, rooted in fundamental rights, that must be respected to ensure that AI systems are developed, implemented and used in a trustworthy way are (Joint Research Centre (European Commission), Samoili, et al. 2020):

  • Respect for human autonomy,
  • Prevention of harm,
  • Fairness,
  • Explicability.

These principles are reflected in legal requirements (the scope of lawful AI, which is the first component of trustworthy AI).

Stakeholder responsibilities (Joint Research Centre (European Commission), Samoili, et al. 2020):

  1. Developers: must implement and apply the requirements of the design and development processes.
  2. Implementers: must ensure that the systems they use and the products and services they offer meet the requirements.
  3. End users and society in general: must be informed of these requirements and be able to demand that they be respected.

Requirements, covering systemic, individual and societal aspects (Joint Research Centre (European Commission), Samoili, et al. 2020):

  1. Human agency and oversight, including fundamental rights and human oversight.
  2. Technical robustness and safety, including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility.
  3. Privacy and data governance, including respect for privacy, data quality and integrity, and access to data.
  4. Transparency, including traceability, explainability and communication.
  5. Diversity, non-discrimination and fairness, including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation.
  6. Societal and environmental well-being, including sustainability and environmental friendliness, social impact, society and democracy.
  7. Accountability, including auditability, minimization and reporting of negative impacts, trade-offs and redress.

Implementation of these requirements should occur throughout the entire life cycle of an AI system and depends on the specific application.

In June 2016, Satya Nadella, CEO of Microsoft Corporation, in an interview with Slate magazine recommended the following principles and goals for artificial intelligence (Vincent 2016):

  • "AI must be designed to assist humanity," i.e., human autonomy must be respected.
  • "AI must be transparent," meaning that people should know and be able to understand how it works.
  • "AI must maximize efficiency without destroying human dignity."
  • "AI must be designed for intelligent privacy," meaning it earns trust by protecting people's information.
  • "AI must have algorithmic accountability" so humans can undo unintended harm.
  • "AI must guard against bias" so that it does not discriminate against people.

In 2017, the Asilomar AI principles (Asilomar 2017) were embraced by an impressive list of 1273 AI/robotics researchers and others (Joint Research Centre (European Commission), Samoili, et al. 2020):

  • They provide a broad framework covering research objectives, funding and policy linkage.
  • The ethics and values principles address safety, transparency, judicial transparency, accountability, human values, personal privacy, shared benefits, human control, an AI arms race, etc.
  • They discuss longer-term issues such as the risks of possible superintelligence and the mitigation of threats posed by AI systems.

The OECD Principles for AI (OECD 2024a) identify five complementary principles (Joint Research Centre (European Commission), Samoili, et al. 2020):

  1. AI should benefit all people by fostering inclusive growth, sustainable development and well-being.
  2. Artificial intelligence systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards.
  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based results and can challenge them.
  4. AI systems must operate in a robust and secure manner throughout their life cycle, and potential risks should be continuously assessed and managed.
  5. Organizations and individuals developing, implementing or operating AI systems should be held accountable for their proper functioning in accordance with the above principles.

Other sets of principles adopted to date (Floridi 2023):

  • The Montreal Declaration (Université de Montréal 2017), developed under the auspices of the University of Montreal following the Forum on the Socially Responsible Development of AI in November 2017;
  • The OECD Council Recommendation on AI (OECD 2024b);
  • The "five general principles for an AI code" from the report of the United Kingdom House of Lords Select Committee on Artificial Intelligence (House of Lords 2017, par. 417), published in April 2018;
  • The Partnership on AI principles (Partnership on AI 2024), published in collaboration with academics, researchers, civil society organizations, and companies building and using AI technology;
  • China's own AI principles, the Beijing AI Principles;
  • The Google AI Principles (Google 2024), which focus on building socially beneficial artificial intelligence.

Many AI ethics committees and institutes have been formed, including the Stanford Institute for Human-Centered AI (HAI), the Alan Turing Institute, the Partnership on AI, AI Now, IEEE, and others. Research advances make it possible to develop evaluation frameworks for fairness, transparency and accountability (Joint Research Centre (European Commission), Samoili, et al. 2020).

Ethical challenges of AI

If a machine has a mind and subjective experience, then it may also have sentience (the ability to feel), and if so, it might also have certain rights (Russell and Norvig 2016, 964), on a common spectrum with animal and human rights (Henderson 2007).

Many academics and governments dispute the idea that AI can be held accountable per se (Bryson, Diamantis, and Grant 2017). Also, some experts and academics disagree with the use of robots in military combat, especially if they have autonomous functions (Palmer 2009).

Attempts are currently being made to create tests to determine whether an AI is capable of making ethical decisions. The Turing test is considered insufficient for this purpose. One specific proposed test is the Ethical Turing Test, in which several judges decide whether the AI's decision is ethical or unethical (A. F. Winfield et al. 2019).
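A minimal sketch of how such a panel-based protocol could be scored is given below; the judge labels and the two-thirds acceptance threshold are assumptions made for illustration and are not part of the cited proposal.

```python
# Hypothetical scoring of an "Ethical Turing Test": several human judges
# label each AI decision as ethical or unethical, and the system passes a
# scenario only if a chosen supermajority of judges accepts its decision.
from collections import Counter

def panel_verdict(votes: list, threshold: float = 2 / 3) -> bool:
    """Return True if at least `threshold` of the judges voted 'ethical'."""
    counts = Counter(votes)
    return counts["ethical"] / len(votes) >= threshold

# Invented example: five judges assess one decision made by the system.
votes = ["ethical", "ethical", "unethical", "ethical", "ethical"]
print(panel_verdict(votes))  # True: 4 of 5 judges accept the decision
```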

Privacy and surveillance: A critical ethical issue associated with AI is privacy. AI systems frequently depend on extensive data to operate efficiently, leading to the collection, storage, and analysis of personal information on an unprecedented scale. This data encompasses a variety of aspects, including online activities, purchasing patterns, biometric data, and personal communications. The potential for the misuse of this data, whether by governmental entities, corporations, or nefarious actors, poses significant ethical concerns.

Surveillance is a particularly contentious area. AI-driven surveillance technologies, such as facial recognition, are capable of monitoring and tracking individuals without their consent, thereby violating privacy rights. In certain instances, these technologies have been implemented in ways that disproportionately affect marginalized communities, furthering existing social disparities. The ethical challenge here is to reconcile the potential advantages of such technologies, such as improved security and crime prevention, with the imperative to safeguard individual privacy and prevent abuses of authority.

Addressing bias and fairness: AI systems are vulnerable to biases and errors introduced by their human creators and by the data used to train these systems (Gabriel 2018). One approach to addressing bias is to create documentation for the data used to train AI systems (Bender and Friedman 2018).
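The sketch below illustrates what such dataset documentation might look like in code; the fields are a simplified subset loosely inspired by Bender and Friedman's data statements, and all example values are invented.

```python
# Simplified, illustrative "data statement" for a training corpus, loosely
# following the categories proposed by Bender and Friedman (2018).
from dataclasses import dataclass, asdict

@dataclass
class DataStatement:
    curation_rationale: str       # why these data were selected
    language_variety: str         # e.g. dialect, register
    speaker_demographics: str     # who produced the text
    annotator_demographics: str   # who labeled the text
    known_limitations: str        # gaps and likely biases

statement = DataStatement(
    curation_rationale="CVs collected from one company's 2010-2015 hires",
    language_variety="English, formal written register",
    speaker_demographics="predominantly male applicants (invented example)",
    annotator_demographics="three in-house HR staff",
    known_limitations="historical hiring decisions may encode gender bias",
)
print(asdict(statement))
```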

Artificial Intelligence (AI) systems are frequently praised for their capacity to make decisions grounded in data rather than the subjective nature of human intuition, which is susceptible to errors and biases. Nonetheless, it is imperative to acknowledge that AI is not exempt from these biases. In fact, AI systems have the potential to perpetuate and even exacerbate the biases inherent in the data upon which they are trained. For example, if an AI system is trained on historical hiring data that reflects gender or racial biases, it may inadvertently replicate these biases in its recommendations, resulting in inequitable outcomes.

The ethical challenge at hand is to ensure that AI systems are developed and implemented in a manner that fosters fairness and avoids discrimination against individuals or groups. This necessitates a meticulous consideration of the data utilized for training AI systems, alongside continuous monitoring to identify and rectify biases. Furthermore, it prompts inquiries regarding the transparency of AI’s decision-making processes and the capacity of individuals to contest and comprehend these decisions.
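As one concrete, hedged example of such monitoring, the following sketch computes a demographic parity difference, the gap in positive-outcome rates between two groups, on invented model outputs; a real audit would combine several metrics with domain review rather than rely on a single number.

```python
# Toy fairness check: compare the rate of positive decisions (e.g. "invite
# to interview") between two demographic groups. A large gap is a signal
# to investigate the model and its training data, not a verdict by itself.
def positive_rate(decisions: list) -> float:
    return sum(decisions) / len(decisions)

# Invented model outputs: 1 = positive decision, 0 = negative decision.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # positive rate 0.75
group_b = [0, 1, 0, 0, 1, 0, 0, 1]   # positive rate 0.375

parity_gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Demographic parity difference: {parity_gap:.3f}")  # 0.375
```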

Robot rights: The concept that humans should have moral obligations to their machines, similar to human rights or animal rights (W. Evans 2015). For example, in October 2017 the android Sophia was granted citizenship in Saudi Arabia (Hatmaker 2017). The philosophy of sentientism accords degrees of moral consideration to all sentient beings, including artificial intelligence if it is proven to be sentient.

Unlike humans, an AGI can be copied any number of times. Are the copies the same person or several different people? Does it get one vote, or several? Is deleting one of the copies a crime? Would treating AGIs like any other computer programs constitute brainwashing, slavery and tyranny? (Deutsch 2012)

Threat to human dignity: Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace humans in positions that require respect and care (Weizenbaum 1976). John McCarthy countered: "When moralizing is both vehement and vague, it invites authoritarian abuse" (McCarthy 2000).

Artificial intelligence stands poised to significantly alter the employment landscape, with the threat of job displacement looming large across various sectors, from manufacturing to customer service. While proponents argue that AI will spawn new employment opportunities and stimulate economic growth, there are concerns regarding the potential social and economic upheaval stemming from widespread job displacement. The ethical imperative here is to navigate the transition towards an AI-centric economy in a manner that minimizes adverse effects and guarantees the equitable distribution of AI’s benefits.

This necessitates the implementation of policies aimed at supporting workers displaced by automation, including retraining initiatives and social safety nets. Furthermore, a comprehensive examination of the societal ramifications of AI is essential. For instance, the concentration of AI research and development within a select few large technology corporations raises issues concerning power centralization and the possibility of AI exacerbating existing societal inequalities. Consequently, the ethical development of AI should encompass efforts to democratize access to AI technologies and ensure their utilization for societal welfare, rather than merely for-profit maximization.

Liability for self-driving cars: There is a debate about legal liability in the event of an accident. If a driverless car hits a pedestrian, who is to blame for the accident: the driver, the pedestrian, the manufacturer, or the government? (Shinn 2021)

Weapons that include AI: Many experts and academics oppose the use of autonomous robots in military combat. There is a possibility that robots will develop the ability to make their own logical decisions about killing. This includes autonomous drones. Stephen Hawking and Max Tegmark signed a "Future of Life" petition (Asilomar 2017) to ban AI-equipped weapons (Musgrave and Roberts 2015).

Opaque algorithms: Machine learning with neural networks can lead to AI decisions that the humans who programmed them cannot explain. Explainable AI encompasses both explainability (summarizing neural network behavior and increasing user confidence) and interpretability (understanding what a model has done or could do) (Bunn 2020).
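A hedged sketch of one widely used post-hoc explainability technique, permutation importance, is shown below: it summarizes an otherwise opaque model's behavior by measuring how much its accuracy drops when each input feature is shuffled. The model, data and feature names are invented for illustration.

```python
# Post-hoc explainability sketch: permutation importance for an opaque model.
# Shuffling one feature and measuring the accuracy drop indicates how much
# the model relies on it, without opening the "black box" itself.
import random
from sklearn.neural_network import MLPClassifier

random.seed(0)
# Invented data: two informative features plus one pure-noise feature.
X = [[i % 5, (i * 7) % 3, random.random()] for i in range(200)]
y = [1 if row[0] + row[1] > 3 else 0 for row in X]

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

def accuracy(model, X, y):
    return sum(int(p == t) for p, t in zip(model.predict(X), y)) / len(y)

baseline = accuracy(model, X, y)
for j, name in enumerate(["feature_a", "feature_b", "noise"]):
    shuffled = [row[:] for row in X]
    column = [row[j] for row in shuffled]
    random.shuffle(column)
    for row, value in zip(shuffled, column):
        row[j] = value
    print(name, "importance ~", round(baseline - accuracy(model, shuffled, y), 3))
```

Explanations of this kind summarize behavior (explainability) but do not, by themselves, guarantee that a human understands what the model has done or could do (interpretability).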

Whether we are talking about a weak AI or a strong AI (AGI), once norms are imposed, compliance with them can take three possible directions: a) strict compliance with these norms; b) the AI's own, different interpretation of the imposed norms (with the possibility of deviating from the intended objectives); and c) (in the case of AGI only) the development of its own, very different norms and ethics.

The laws of robots

The first ethical laws are considered to be the Ten Commandments, which appear three times in the Old Testament and were, according to the Bible, dictated by God to Moses (Coogan 2014, 27, 33): a set of biblical principles related to ethics and worship that originated in the Jewish tradition and plays a fundamental role in Judaism and Christianity.

The best-known set of laws for robots are those written by Isaac Asimov in the 1940s, introduced in his 1942 short story "Runaround":

  1. A robot cannot harm a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings, unless such orders conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law (Asimov 2004).

In The Evitable Conflict, the First Law for Machines is generalized:

  1. No machine may harm humanity or, through inaction, allow harm to be done to humanity.

In Foundation and Earth, a zeroth law was introduced, with the original three duly rewritten as subordinate to it:

  1. A robot cannot harm humanity or, through inaction, allow humanity to be harmed.
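As a rough illustration of this strict subordination, the sketch below screens a candidate action against the laws in priority order (Zeroth first) and rejects it at the first law it violates; the action flags are invented placeholders, and the model deliberately ignores the subtler conflict resolution explored in Asimov's stories.

```python
# Toy model of Asimov's law hierarchy: laws are checked in priority order
# (Zeroth, First, Second, Third) and an action is vetoed by the first law
# it violates. The flags on the action are invented placeholders.
LAWS = [
    ("Zeroth", lambda a: not a["harms_humanity"]),
    ("First",  lambda a: not a["harms_human"]),
    ("Second", lambda a: a["obeys_order"]),
    ("Third",  lambda a: a["preserves_self"]),
]

def permitted(action: dict):
    for name, law_is_satisfied in LAWS:
        if not law_is_satisfied(action):
            return False, f"vetoed by the {name} Law"
    return True, "permitted"

action = {"harms_humanity": False, "harms_human": False,
          "obeys_order": True, "preserves_self": False}
print(permitted(action))  # (False, 'vetoed by the Third Law')
```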

In 2011, the UK's Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) published a set of five "ethical principles for designers, builders and users of robots" in the real world, along with seven "high-level messages" (A. Winfield 2011):

Ethical principles:

  1. Robots should not be designed solely or primarily to kill or injure humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that ensure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or addiction. It should always be possible to distinguish between a robot and a human.
  5. It should always be possible to find out who is legally responsible for a robot.

Terminology for the legal assessment of robot development is being implemented in Asian countries (BBC 2007).

Mark W. Tilden proposed a series of principles/rules for robots (Dunn 2009):

  1. A robot must protect its existence at all costs.
  2. A robot must obtain and maintain access to its own power source.
  3. A robot must continuously search for better energy sources.

Ethical machines

Friendly AI refers to machines that have been designed from the ground up to minimize risk and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment, and it must be completed before AI becomes an existential risk.

Intelligent machines have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. Machine ethics is also called machine morality, computational ethics, or computational morality, and was founded at an AAAI symposium in 2005.

Other approaches include Wendell Wallach’s „artificial moral agents” and Stuart J. Russell’s three principles for developing machines that prove beneficial (Sfetcu 2021).

Conclusion

As AI technology progresses, the distinction between human and machine-based decision-making is becoming increasingly indistinct. This phenomenon raises profound ethical considerations regarding the future of human autonomy and agency. For example, if AI systems can make decisions that are indistinguishable from those made by humans, what implications does this have for the concept of free will? Moreover, the integration of AI systems into various facets of everyday life poses a risk of individuals becoming excessively dependent on these technologies, potentially leading to a diminution of critical thinking skills and human judgment.

From an ethical standpoint, it is imperative to contemplate the long-term effects of AI on human society and culture. This includes the design of AI systems to augment, rather than detract from, human capabilities and values. It also involves the promotion of a public discourse on the societal role of AI and the encouragement of AI technology development that is in harmony with human interests.

In conclusion, the ethical considerations surrounding artificial intelligence are multifaceted and continuously evolving, addressing some of the most critical questions concerning the future of human society. As AI technology advances, it is crucial to approach its development and application with a robust ethical framework that balances innovation with responsibility. This includes the protection of privacy, the assurance of fairness, the establishment of accountability, the management of the societal impact of automation, and the preservation of human autonomy. By addressing these ethical challenges with foresight, we can leverage the transformative potential of AI while ensuring its benefits are realized for the greater good.

Bibliography

  • AAAI. 2014. “Machine Ethics.” November 29, 2014. https://web.archive.org/web/20141129044821/http://www.aaai.org/Library/Symposia/Fall/fs05-06.
  • AlgorithmWatch. 2024. “AI Ethics Guidelines Global Inventory.” AlgorithmWatch. 2024. https://algorithmwatch.org/en/ai-ethics-guidelines-global-inventory/.
  • Al-Rodhan, Nayef. 2015. “The Moral Code.” Foreign Affairs, August 12, 2015. https://www.foreignaffairs.com/moral-code.
  • Anderson, Michael, and Susan Leigh Anderson, eds. 2011. Machine Ethics. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511978036.
  • Asilomar. 2017. “Asilomar AI Principles.” Future of Life Institute (blog). 2017. https://futureoflife.org/open-letter/ai-principles/.
  • Asimov, Isaac. 2004. I, Robot. Bantam Books.
  • BBC. 2007. “Robotic Age Poses Ethical Dilemma,” March 7, 2007. http://news.bbc.co.uk/2/hi/technology/6425927.stm.
  • Bender, Emily M., and Batya Friedman. 2018. “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science.” Transactions of the Association for Computational Linguistics 6 (December):587–604. https://doi.org/10.1162/tacl_a_00041.
  • Bostrom, Nick, and Eliezer Yudkowsky. 2018. “The Ethics of Artificial Intelligence.” In , 57–69. https://doi.org/10.1201/9781351251389-4.
  • Boyles, Robert James M. 2017. “Philosophical Signposts for Artificial Moral Agent Frameworks.” Suri 6 (2): 92–109.
  • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant. 2017. “Of, for, and by the People: The Legal Lacuna of Synthetic Persons.” Artificial Intelligence and Law 25 (3): 273–91. https://doi.org/10.1007/s10506-017-9214-9.
  • Bunn, Jenny. 2020. “Working in Contexts for Which Transparency Is Important: A Recordkeeping View of Explainable Artificial Intelligence (XAI).” Records Management Journal 30 (2): 143–53. https://doi.org/10.1108/RMJ-08-2019-0038.
  • Coogan, Michael. 2014. The Ten Commandments: A Short History of an Ancient Text. Yale University Press. https://www.jstor.org/stable/j.ctt5vkqht.
  • Deutsch, David. 2012. “Philosophy Will Be the Key That Unlocks Artificial Intelligence.” The Guardian, October 3, 2012, sec. Science. https://www.theguardian.com/science/2012/oct/03/philosophy-artificial-intelligence.
  • Dunn, Ashley. 2009. “Machine Intelligence, Part II: From Bumper Cars to Electronic Minds.” 2009. https://archive.nytimes.com/www.nytimes.com/library/cyber/surf/0605surf.html.
  • EU. 2024. “EU Artificial Intelligence Act | Up-to-Date Developments and Analyses of the EU AI Act.” 2024. https://artificialintelligenceact.eu/.
  • Evans, Woody. 2015. “Posthuman Rights: Dimensions of Transhuman Worlds.” Teknokultura. Revista de Cultura Digital y Movimientos Sociales 12 (2): 373–84. https://doi.org/10.5209/rev_TK.2015.v12.n2.49072.
  • Floridi, Luciano. 2023. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press. https://doi.org/10.1093/oso/9780198883098.001.0001.
  • Floridi, Luciano, and Josh Cowls. 2019. “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review 1 (1). https://doi.org/10.1162/99608f92.8cd550d1.
  • Fox, Stuart. 2009. “Evolving Robots Learn To Lie To Each Other.” Popular Science. August 19, 2009. https://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other/.
  • Gabriel, Iason. 2018. “The Case for Fairer Algorithms.” Medium (blog). March 14, 2018. https://medium.com/@Ethics_Society/the-case-for-fairer-algorithms-c008a12126f8.
  • Google. 2024. “Google AI Principles.” Google AI. 2024. https://ai.google/responsibility/principles/.
  • Guadamuz, Andres. 2017. “Artificial Intelligence and Copyright.” 2017. https://www.wipo.int/wipo_magazine/en/2017/05/article_0003.html.
  • Hart, David, and Ben Goertzel. 2016. “OpenCog: A Software Framework for Integrative Artificial General Intelligence.” https://web.archive.org/web/20160304205408/http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.621&rep=rep1&type=pdf.
  • Hatmaker, Taylor. 2017. “Saudi Arabia Bestows Citizenship on a Robot Named Sophia.” TechCrunch (blog). October 26, 2017. https://techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/.
  • Henderson, Mark. 2007. “Human Rights for Robots? We’re Getting Carried Away.” The Times, 2007. https://www.thetimes.co.uk/article/human-rights-for-robots-were-getting-carried-away-xfbdkpgwn0v.
  • Hibbard, Bill. 2016. “Open Source AI.” https://www.ssec.wisc.edu/~billh/g/hibbard_agi_workshop.pdf.
  • House of Lords. 2017. “AI in the UK: Ready, Willing and Able? – Artificial Intelligence Committee.” 2017. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm.
  • Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (9): 389–99. https://doi.org/10.1038/s42256-019-0088-2.
  • Joint Research Centre (European Commission), S. Samoili, M. López Cobo, E. Gómez, G. De Prato, F. Martínez-Plumed, and B. Delipetrev. 2020. AI Watch: Defining Artificial Intelligence : Towards an Operational Definition and Taxonomy of Artificial Intelligence. Publications Office of the European Union. https://data.europa.eu/doi/10.2760/382730.
  • Kritikos, Mihalis. 2019. “Artificial Intelligence Ante Portas: Legal & Ethical Reflections | Think Tank | European Parliament.” 2019. https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2019)634427.
  • Law Library of Congress (U.S.), ed. 2019. Regulation of Artificial Intelligence in Selected Jurisdictions. Washington, D.C.: The Law Library of Congress, Staff of the Global Legal Research Directorate.
  • McCarthy, John. 2000. Defending AI Research: A Collection of Essays and Reviews. CSLI Publications.
  • Metz, Cade. 2016. “Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free.” Wired, 2016. https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/.
  • Moor, James H. 2009. “Four Kinds of Ethical Robots | Issue 72 | Philosophy Now.” 2009. https://philosophynow.org/issues/72/Four_Kinds_of_Ethical_Robots.
  • Moor, J.H. 2006. “The Nature, Importance, and Difficulty of Machine Ethics.” IEEE Intelligent Systems 21 (4): 18–21. https://doi.org/10.1109/MIS.2006.80.
  • Müller, Vincent C. 2023. “Ethics of Artificial Intelligence and Robotics.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman, Fall 2023. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2023/entries/ethics-ai/.
  • Musgrave, Zach, and Bryan W. Roberts. 2015. “Why Humans Need To Ban Artificially Intelligent Weapons.” The Atlantic (blog). August 14, 2015. https://www.theatlantic.com/technology/archive/2015/08/humans-not-robots-are-the-real-reason-artificial-intelligence-is-scary/400994/.
  • OECD. 2024a. “AI-Principles Overview.” 2024. https://oecd.ai/en/principles.
  • ———. 2024b. “OECD Legal Instruments.” 2024. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
  • Palmer, Jason. 2009. “Call for Debate on Killer Robots,” August 3, 2009. http://news.bbc.co.uk/2/hi/technology/8182003.stm.
  • Partnership on AI. 2024. “Partnership on AI.” Partnership on AI. 2024. https://partnershiponai.org/about/.
  • Russell, Stuart, and Peter Norvig. 2016. “Artificial Intelligence: A Modern Approach, 4th US Ed.” 2016. https://aima.cs.berkeley.edu/.
  • Santos-Lang, Christopher. 2015. “Moral Ecology Approaches to Machine Ethics.” Intelligent Systems, Control and Automation: Science and Engineering 74 (January):111–27. https://doi.org/10.1007/978-3-319-08108-3_8.
  • Sfetcu, Nicolae. 2021. Introducere în inteligența artificială. Nicolae Sfetcu. https://www.telework.ro/ro/e-books/introducere-in-inteligenta-artificiala/.
  • ———. 2024. Intelligence, from Natural Origins to Artificial Frontiers – Human Intelligence vs. Artificial Intelligence. Bucharest, Romania: MultiMedia Publishing. https://www.telework.ro/en/e-books/intelligence-from-natural-origins-to-artificial-frontiers-human-intelligence-vs-artificial-intelligence/.
  • Shinn, Lora. 2021. “Everything You Need to Know About Insurance for Self-Driving Cars.” The Balance. 2021. https://www.thebalancemoney.com/self-driving-cars-and-insurance-what-you-need-to-know-4169822.
  • Sotala, Kaj, and Roman V. Yampolskiy. 2014. “Responses to Catastrophic AGI Risk: A Survey.” Physica Scripta 90 (1): 018001. https://doi.org/10.1088/0031-8949/90/1/018001.
  • UN. 2017. “UN Artificial Intelligence Summit Aims to Tackle Poverty, Humanity’s ‘Grand Challenges’ | UN News.” June 7, 2017. https://news.un.org/en/story/2017/06/558962.
  • UNESCO. 2021. “The Race against Time for Smarter Development | 2021 Science Report.” 2021. https://www.unesco.org/reports/science/2021/en.
  • Université de Montréal. 2017. “Déclaration de Montréal IA responsable.” Déclaration de Montréal IA responsable. 2017. https://declarationmontreal-iaresponsable.com/.
  • Vincent, James. 2016. “Satya Nadella’s Rules for AI Are More Boring (and Relevant) than Asimov’s Three Laws.” The Verge. June 29, 2016. https://www.theverge.com/2016/6/29/12057516/satya-nadella-ai-robot-laws.
  • Waldrop, M. Mitchell. 1987. “A Question of Responsibility.” AI Magazine 8 (1): 28–28. https://doi.org/10.1609/aimag.v8i1.572.
  • Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman.
  • Winfield, Alan. 2011. “Five Roboethical Principles – for Humans.” New Scientist. 2011. https://www.newscientist.com/article/mg21028111-100-five-roboethical-principles-for-humans/.
  • Winfield, Alan F., Katina Michael, Jeremy Pitt, and Vanessa Evers. 2019. “Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue].” Proceedings of the IEEE 107 (3): 509–17. https://doi.org/10.1109/JPROC.2019.2900622.
  • Yudkowsky, Eliezer. 2008. Artificial Intelligence as a Positive and Negative Factor in Global Risk. Global Catastrophic Risks. https://ui.adsabs.harvard.edu/abs/2008gcr..book..303Y.

 

Articol cu Acces Deschis (Open Access) distribuit în conformitate cu termenii licenței de atribuire Creative Commons CC BY SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).



Instrumente utilizate în dezvoltarea IA – Testul Turing

IT & C - Descarcă PDFSfetcu, Nicolae (2024), Instrumente utilizate în dezvoltarea IA – Testul Turing, IT & C, 3:3, 3-10, DOI: 10.58679/IT18405, https://www.internetmobile.ro/instrumente-utilizate-in-dezvoltarea-ia-testul-turing/

 

Tools Used in AI Development – The Turing Test

Abstract

Artificial intelligence has become a cornerstone of modern technological innovation, propelling advances from self-driving to personalized medicine. Developing AI systems requires specialized tools and technologies, each tailored to facilitate different aspects of AI research and application. This essay provides an overview of the primary tools used and trends in AI development. Understanding these tools enriches our understanding of how AI solutions are created, refined and delivered across sectors.

Keywords: artificial intelligence, turing test, AI development, AI trends, machine learning, deep learning

Rezumat

Inteligența artificială a devenit o piatră de temelie a inovației tehnologice moderne, propulsând progresele de la conducerea automată la medicina personalizată. Dezvoltarea sistemelor AI necesită instrumente și tehnologii specializate, fiecare adaptată pentru a facilita diferite aspecte ale cercetării și aplicării AI. Acest eseu oferă o privire de ansamblu asupra instrumentelor primare utilizate și tendințelor în dezvoltarea AI. Înțelegerea acestor instrumente ne îmbogățește înțelegerea modului în care soluțiile AI sunt create, rafinate și livrate în diferite sectoare.

Cuvinte cheie: inteligența artificială, testul Turing, dezvoltarea IA, tendințe IA, învățarea automată, învățarea profundă

 

IT & C, Volumul 3, Numărul 3, Septembrie 2024, pp. 3-10
ISSN 2821 – 8469, ISSN – L 2821 – 8469, DOI: 10.58679/IT18405
URL: https://www.internetmobile.ro/instrumente-utilizate-in-dezvoltarea-ia-testul-turing/
© 2024 Nicolae Sfetcu. Responsabilitatea conținutului, interpretărilor și opiniilor exprimate revine exclusiv autorilor.

 

Instrumente utilizate în dezvoltarea IA – Testul Turing

Ing. fiz. Nicolae SFETCU[1], MPhil

nicolae@sfetcu.com

[1] Cercetător – Academia Română – Comitetul Român de Istoria și Filosofia Științei și Tehnicii (CRIFST), Divizia de Istoria Științei (DIS), ORCID: 0000-0002-0162-9973

 

Introducere

Inteligența artificială a devenit o piatră de temelie a inovației tehnologice moderne, propulsând progresele de la conducerea automată la medicina personalizată. Dezvoltarea sistemelor AI necesită instrumente și tehnologii specializate, fiecare adaptată pentru a facilita diferite aspecte ale cercetării și aplicării AI. Acest eseu oferă o privire de ansamblu asupra instrumentelor primare utilizate și tendințelor în dezvoltarea AI. Înțelegerea acestor instrumente ne îmbogățește înțelegerea modului în care soluțiile AI sunt create, rafinate și livrate în diferite sectoare.

Instrumente

Instrumente utilizate în dezvoltarea IA:

  • Căutarea și optimizarea: Raționamentul poate fi redus la efectuarea unei căutări și aplicarea unor reguli de inferență. Euristica furnizează „cea mai bună presupunere” pentru calea pe care se află soluția. În anii 1990 s-a dezvoltat o căutare bazată pe teoria matematică a optimizării. Calculul evolutiv folosește, de asemenea o formă de căutare de optimizare(Russell și Norvig 2016).
  • Logica: Folosită în special pentru reprezentarea cunoștințelor și rezolvarea de probleme, precum algoritmul satplan care folosește logica pentru planificare, sau programarea logică inductivă ca o metodă de învățare(Russell și Norvig 2016).
  • Metode probabilistice pentru raționamentul incert: Există situații în care agentul trebuie să opereze cu informații incomplete sau incerte. Pentru a rezolva aceste probleme se folosesc metode din teoria probabilității și economie, precum rețelele bayesiene. Algoritmii probabilistici pot fi utilizați și în fluxurile de date, pentru analiza proceselor (de exemplu, modele Markov ascunse sau filtre Kalman) (Russell și Norvig 2016).
  • Clasificatori și metode de învățare statistică: Aplicațiile IA pot fi împărțite în clasificatori (care clasifică) și controlere (care implică o acțiune). Clasificatorii utilizează potrivirea modelului pentru a determina cea mai apropiată potrivire. Un clasificator poate fi antrenat statistic sau prin învățarea automată, de ex. prin arborele de decizie care este cel mai simplu și mai utilizat algoritm simbolic de învățare automată, algoritmul K-al celui mai apropiat vecin, mașina vectorului suport (SVM) (Russell și Norvig 2016), clasificatorul naiv bayesian (Domingos 2015), rețelele neuronale (Russell și Norvig 2016), etc.
  • Rețele neuronale artificiale: Inspirate de arhitectura neuronilor din creierul uman, antrenate prin diverse tehnici, precum algoritmul de retropropagare(Russell și Norvig 2016), învățarea Hebbian, GMDH sau învățarea competitivă (Luger și Stubblefield 1993). Cele mai folosite rețele neuronale sune cele aciclice (feedforward, unde semnalul trece într-o singură direcție, precum perceptronii, perceptronii multistrat și rețelele pe bază radială (Russell și Norvig 2016)) și rețelele neuronale recurente (care permit feedback și memorii pe termen scurt).
  • Învățarea profundă: Folosește mai multe straturi de neuroni între intrările și ieșirile rețelei. Învățarea profundă a îmbunătățit performanța programelor în multe subdomenii IA(Ciregan, Meier, și Schmidhuber 2012, 3642–49), folosind rețele neuronale convoluționale unde un neuron primește input doar dintr-o zonă restrânsă a stratului anterior (Habibi Aghdam și Jahani Heravi 2017), și rețele neuronale recurente unde semnalul se va propaga printr-un strat de mai multe ori (Luger și Stubblefield 1993, 474–505).
  • Limbaje și hardware specializate: Limbaje specializate pentru inteligența artificială, precum Lisp, Prolog, TensorFlow, etc., și hardware precum acceleratoare IA și calcul neuromorf.

Învățarea automată (ML), ca subdomeniu IA, include algoritmi care construiesc un model bazat pe date eșantion, cunoscut sub denumirea de „date de antrenament”. ML poate fi împărțită în următoarele categorii (Joint Research Centre (European Commission), Samoili, et al. 2020):

  • Algoritmii de învățare supravegheată: mapează valorile de intrare la ieșire pe baza exemplelor etichetate de perechi intrare-ieșire. Are nevoie de cantități considerabile de date etichetate, adesea făcute de oameni.
  • Algoritmii de învățare nesupravegheată: ajută la găsirea modelelor necunoscute anterior în seturile de date fără etichete preexistente, prin gruparea articolelor similare în „clustere”.
  • Algoritmii de învățare semi-supravegheată: o categorie intermediară între învățarea supravegheată și nesupravegheată, în care datele conțin atât date etichetate, cât și neetichetate.
  • Învățarea prin consolidare: agenții întreprind acțiuni într-un mediu pentru a maximiza o recompensă.

Tendințe

Învățarea profundă și rețele neuronale: Învățarea profundă, activată de rețelele neuronale, permite algoritmilor să învețe modele complexe din cantități mari de date, realizând sarcini care anterior erau considerate dincolo de sfera mașinilor. Viziunea computerizată a avansat odată cu utilizarea rețelelor neuronale convoluționale, care pot învăța să extragă caracteristici și modele din datele vizuale (Sfetcu 2023).

Procesarea limbajului natural (NLP): Aplicațiile procesării limbajului natural, cum ar fi chatbot, asistenți vocali, traducere automată, analiză a sentimentelor, rezumat text, analiza sentimentelor și multe altele, au făcut interacțiunile cu computerele mai naturale și mai eficiente. NLP s-a îmbunătățit odată cu utilizarea modelelor de învățare profundă, cum ar fi transformatoarele, care pot capta informațiile semantice și sintactice ale limbajului natural.

Învățarea prin consolidare și sisteme autonome: O tehnică puternică de instruire a agenților IA pentru a lua decizii în medii dinamice. S-a dezvoltat cu ajutorul rețelelor neuronale profunde, care pot învăța politici și strategii complexe din date cu dimensiuni mari. Rețele adverse generative sunt un tip de rețea neuronală care poate genera date realiste și noi, cum ar fi imagini, videoclipuri, text și audio.

Etica și explicabilitatea IA: Cercetătorii și factorii de decizie politică lucrează pentru a aborda problema ”cutiei negre” prin promovarea transparenței, a răspunderii și a dezvoltării de modele IA explicabile. Cadrele etice de inteligență artificială urmăresc să se asigure că tehnologiile de inteligență artificială sunt dezvoltate și utilizate în mod responsabil, respectând valorile și drepturile omului.

Edge IA și învățarea federată: Utilizarea edge computing prin rularea algoritmilor IA pe dispozitive locale, mai degrabă decât să se bazeze doar pe infrastructura bazată pe cloud. Învățarea federată, un subset al edge IA, permite mai multor dispozitive să antreneze în colaborare un model IA partajat fără a partaja date brute, păstrând confidențialitatea utilizatorilor.

IA în asistența medicală: IA a făcut progrese semnificative în sectorul asistenței medicale, asistând profesioniștii din domeniul medical în diagnosticare, planificare a tratamentului, descoperirea medicamentelor și monitorizarea pacienților. Modelele de învățare automată analizează imagini medicale, prezic progresia bolii și identifică modele în datele pacienților pentru a oferi soluții personalizate de asistență medical (Sfetcu 2023).

Roboți: Dennett et al. afirmă că este puțin probabil ca cineva să creeze vreodată un robot care să fie conștient exact așa cum sunt oamenii, apelând la următoarele argumente (Daniel C. Dennett et al. 1994):

  • Roboții sunt lucruri pur materiale, iar conștiința necesită lucruri imateriale ale minții. (dualism tradițional)
  • Roboții sunt anorganici (prin definiție), iar conștiința poate exista doar într-un creier organic.
  • Roboții sunt artefacte, iar conștiința detestă un artefact; numai ceva natural, născut, nefabricat, ar putea prezenta o conștiință autentică.
  • Roboții vor fi întotdeauna mult prea simpli pentru a fi conștienți.

Testul Turing

René Descartes a prefigurat aspecte ale testului Turing în Discursul său din 1637, când a scris:

”Câte automate diferite sau mașini în mișcare ar putea fi fabricate de industria umană… Căci putem înțelege cu ușurință apariția unei mașini astfel încât să poată rosti cuvinte și chiar să emită unele răspunsuri la acțiunile de natură corporală asupra ei, care aduce o schimbare în organele sale; de exemplu, dacă este atinsă într-o anumită parte, poate întreba ce dorim să-i spunem; într-o altă parte poate exclama că este rănită, și așa mai departe. Dar nu se întâmplă niciodată să-și aranjeze vorbirea în diferite feluri, pentru a răspunde adecvat la tot ce se poate spune în prezența ei, așa cum poate face și cel mai de jos tip de om.” (René Descartes 1996, 34–35)

Descartes consideră că ceea ce separă umanul de automat este insuficiența răspunsului lingvistic adecvat, dar ia în considerare posibilitatea ca viitoarele automate să poată depăși această problemă.

În 1936, Alfred Ayer a luat în considerare problema altor minți: de unde știm că alți oameni au aceleași experiențe conștiente ca și noi? În Language, Truth and Logic a sugerat un protocol pentru a distinge între un om conștient și o mașină inconștientă:

„Singurul temei pe care îl pot avea pentru a afirma că un obiect care pare a fi conștient nu este cu adevărat o ființă conștientă, ci doar o mașină, este că nu reușește să satisfacă unul dintre testele empirice prin care este determinată prezența sau absența conștiinței.” (Ayer 1936)

Testul Turing, numit inițial jocul de imitație de Alan Turing în 1950 (Turing 1950), este un test al capacității unei mașini de a prezenta un comportament inteligent echivalent cu cel al unui om, sau care nu se poate distinge de acesta. Lucrarea a fost dezvoltată pe baza ideii lui Kurt Godel, conform căreia există afirmații despre calculul numerelor care sunt adevărate, dar care nu pot fi dovedite (Huang și Smith 2006). În test, un evaluator uman asistă la conversațiile în limbaj natural între un om și o mașină, fără a-i vedea. Conversația este afișată ca text. Dacă evaluatorul nu poate deosebi net mașina de om, se spune că mașina a trecut testul.

Turing's paper opens with the words: “I propose to consider the question ‘Can machines think?’”, but, because “thinking” is difficult to define, Turing replaces “the question by another, which is closely related to it and is expressed in relatively unambiguous words” (Turing 1950, 433): “Are there imaginable digital computers which would do well in the imitation game?” (Turing 1950, 442)

In 1952 Turing proposed a new version, in which a jury asks questions of a computer, and the computer's role is to make a significant proportion of the jury believe that it is really a human (Weizenbaum 1966, 42).

The test has several limitations: storage limits, theological objections, mathematical counter-arguments, the computer's capacity for innovation and emotion, etc. The potential problems of the Turing test concern whether imitating a human actually proves intelligence (intelligence is possible without passing the Turing test) and to what extent passing the Turing test is a sufficient condition.

In practice, the Turing test does not directly test whether the computer behaves intelligently, only whether it behaves like a human being. The test therefore cannot be used to build or evaluate systems that are more intelligent than humans. Some researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research (Traiger 2000).

John Searle's 1980 paper Minds, Brains, and Programs proposed the “Chinese room” thought experiment and argued that the Turing test cannot be used to determine whether a machine can think (J. R. Searle 1980).

The Turing test. Source: Juan Alberto Sánchez Margallo/Wikimedia Commons, licensed CC BY 2.5

One of the counter-arguments (the “argument from consciousness”) holds that merely imitating a human would not be enough, because imitation does not involve all the characteristics specific to humans. Another counter-argument is “Lady Lovelace's objection”, according to which, since machines can only do what we tell them to do, they cannot originate anything, whereas humans are clearly capable of new concepts and ideas (Oppy and Dowe 2021).

Over time, many alternatives to the Turing test have appeared, such as the Feigenbaum test (Feigenbaum and McCorduck 1983), the test proposed by Nicholas Negroponte (Brand 1989), etc.

Alan Turing predicted that the test would be passed by a computer by the year 2000, but to date no computer has managed to pass the Turing test.

Conclusion

The development of AI involves a complex suite of tools that address diverse needs, from programming and data manipulation to testing and deployment. As the field of AI continues to evolve, so will the tools that support its advancement, driving efficiency and new capabilities in this dynamic discipline. Selecting the right tools is essential for developing effective, robust AI solutions that operate at scale and adapt over time. This understanding not only equips AI developers with the means to carry out their projects, but also empowers them to innovate and push the boundaries of what artificial intelligence can achieve.

Bibliography

  • Ayer, Alfred J. T. 1936. “Language, Truth and Logic”. Nature 138 (3498): 823–823. https://doi.org/10.1038/138823a0.
  • Brand, Stewart. 1989. The Media Lab: Inventing the Future at MIT. Penguin Books.
  • Ciregan, Dan, Ueli Meier, and Jürgen Schmidhuber. 2012. “Multi-column deep neural networks for image classification”. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, 3642–49. https://doi.org/10.1109/CVPR.2012.6248110.
  • Dennett, Daniel C., F. Dretske, S. Shurville, A. Clark, I. Aleksander, and J. Cornwell. 1994. “The Practical Requirements for Making a Conscious Robot [and Discussion]”. Philosophical Transactions: Physical Sciences and Engineering 349 (1689): 133–46.
  • Descartes, René. 1996. “Discourse on the Method and Meditations on First Philosophy”. Yale University Press. https://yalebooks.yale.edu/9780300067736/discourse-on-the-method-and-meditations-on-first-philosophy.
  • Domingos, Pedro. 2015. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Penguin Books Limited.
  • Feigenbaum, Edward A., and Pamela McCorduck. 1983. The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World. Addison-Wesley.
  • Habibi Aghdam, Hamed, and Elnaz Jahani Heravi. 2017. “Guide to Convolutional Neural Networks: A Practical Application to Traffic-Sign Detection and Classification”. https://shop.harvard.com/book/9783319861906.
  • Huang, Ting, and Christopher Smith. 2006. “The History of Artificial Intelligence”. https://www.semanticscholar.org/paper/The-History-of-Artificial-Intelligence-Huang-Smith/085599650ebfcfba0dcb434bc50b7c7c54fdbf05.
  • Joint Research Centre (European Commission), S. Samoili, M. López Cobo, E. Gómez, G. De Prato, F. Martínez-Plumed, and B. Delipetrev. 2020. AI Watch: Defining Artificial Intelligence: Towards an Operational Definition and Taxonomy of Artificial Intelligence. Publications Office of the European Union. https://data.europa.eu/doi/10.2760/382730.
  • Luger, George F., and William A. Stubblefield. 1993. Artificial Intelligence: Structures and Strategies for Complex Problem Solving. Benjamin/Cummings Publishing Company.
  • Oppy, Graham, and David Dowe. 2021. “The Turing Test”. In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Winter 2021. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2021/entries/turing-test/.
  • Russell, Stuart, and Peter Norvig. 2016. “Artificial Intelligence: A Modern Approach, 4th US ed.” https://aima.cs.berkeley.edu/.
  • Searle, John R. 1980. “Minds, Brains, and Programs”. Behavioral and Brain Sciences 3 (3): 417–24. https://doi.org/10.1017/S0140525X00005756.
  • Sfetcu, Nicolae. 2023. “Provocări în inteligența artificială”. IT & C. August 5, 2023. https://www.internetmobile.ro/provocari-in-inteligenta-artificiala/.
  • Traiger, Saul. 2000. “Making the Right Identification in the Turing Test”. Minds and Machines 10 (4): 561–72. https://doi.org/10.1023/A:1011254505902.
  • Turing, A. M. 1950. “Computing Machinery and Intelligence”. Mind LIX (236): 433–60. https://doi.org/10.1093/mind/LIX.236.433.
  • Weizenbaum, Joseph. 1966. “ELIZA—a computer program for the study of natural language communication between man and machine”. Communications of the ACM 9 (1): 36–45. https://doi.org/10.1145/365153.365168.

 

Articol cu Acces Deschis (Open Access) distribuit în conformitate cu termenii licenței de atribuire Creative Commons CC BY SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).

Threats of Artificial Intelligence for Cybersecurity

Sfetcu, Nicolae (2024), Threats of Artificial Intelligence for Cybersecurity, IT & C, 3:3, 38-52, DOI: 10.58679/IT17846, https://www.internetmobile.ro/threats-of-artificial-intelligence-for-cybersecurity/

 

Abstract

Artificial intelligence enables automated decision-making and facilitates many aspects of daily life, bringing with it improvements in operations and numerous other benefits. However, AI systems face numerous cybersecurity threats, and AI itself needs to be secured, as cases of malicious attacks have already been reported: AI techniques and AI-based systems can lead to unexpected results and can be tampered with to manipulate the expected outcomes. It is therefore essential to understand the AI threat landscape and to have a common, unifying basis for understanding potential threats, and consequently to perform specific risk assessments. Such assessments will support the implementation of targeted and proportionate security measures and controls to counter AI-related threats. This article explores the potential threats posed by AI in cybersecurity and discusses the implications for individuals, organizations, and society at large.

Keywords: artificial intelligence, threats, cyber security, cybersecurity, cyber-attacks, black boxes, algorithmic biases, threat actors, threat taxonomy, threat modeling

Amenințări ale inteligenței artificiale pentru securitatea cibernetică

Rezumat

Inteligența artificială permite luarea automată a deciziilor și facilitează multe aspecte ale vieții de zi cu zi, aducând cu ea îmbunătățiri ale operațiunilor și numeroase alte beneficii. Cu toate acestea, sistemele AI se confruntă cu numeroase amenințări la adresa securității cibernetice, iar AI în sine trebuie să fie securizată, deoarece au fost deja raportate cazuri de atacuri rău intenționate, de ex. tehnicile AI și sistemele bazate pe IA pot duce la rezultate neașteptate și pot fi modificate pentru a manipula rezultatele așteptate. Prin urmare, este esențial să înțelegem peisajul amenințărilor AI și să avem o bază comună și unificatoare pentru înțelegerea potențialului amenințărilor și, în consecință, pentru a efectua evaluări specifice de risc. Acesta din urmă va sprijini punerea în aplicare a unor măsuri și controale de securitate direcționate și proporționale pentru a contracara amenințările legate de IA. Acest articol explorează potențialele amenințări reprezentate de AI în securitatea cibernetică și discută implicațiile pentru indivizi, organizații și societate în general.

Cuvinte cheie: inteligență artificială, amenințări, securitate cibernetică, atacuri cibernetice, cutii negre, părtiniri algoritmice, actori ai amenințărilor, taxonomie a amenințărilor, modelare a amenințărilor

 

IT & C, Volumul 3, Numărul 3, Septembrie 2024, pp. 38-52
ISSN 2821 – 8469, ISSN – L 2821 – 8469, DOI: 10.58679/IT17846
URL: https://www.internetmobile.ro/threats-of-artificial-intelligence-for-cybersecurity/
© 2024 Nicolae Sfetcu. Responsabilitatea conținutului, interpretărilor și opiniilor exprimate revine exclusiv autorilor.

 

Introduction

In the digital age, the rapid advancement of artificial intelligence (AI) has revolutionized various industries, including cybersecurity. However, with this transformative technology comes a double-edged sword. While AI offers unprecedented opportunities to bolster defense mechanisms, it also introduces new threats and vulnerabilities that malicious actors can exploit.

Artificial intelligence enables automated decision-making and facilitates many aspects of daily life, bringing with it improvements in operations and numerous other benefits. However, AI systems face numerous cybersecurity threats, and AI itself needs to be secured, as cases of malicious attacks have already been reported: AI techniques and AI-based systems can lead to unexpected results and can be tampered with to manipulate the expected outcomes (Ito 2019). It is therefore essential to understand the AI threat landscape and to have a common, unifying basis for understanding potential threats, and consequently to perform specific risk assessments. Such assessments will support the implementation of targeted and proportionate security measures and controls to counter AI-related threats (ENISA 2020) (Sfetcu 2021).

This article explores the potential threats posed by AI in cybersecurity and discusses the implications for individuals, organizations, and society at large.

Cyber attacks

One of the primary concerns regarding AI in cybersecurity is its potential misuse by cybercriminals. AI-powered attacks have the capability to be more sophisticated and adaptive than traditional methods. For instance, AI algorithms can analyze vast amounts of data to identify patterns and vulnerabilities in systems, enabling attackers to launch highly targeted and effective cyber-attacks. These attacks can range from automated phishing campaigns that craft convincing messages based on deep analysis of social media profiles to AI-driven malware that evolves its behavior to evade detection by traditional security measures.

Moreover, AI can exacerbate existing cybersecurity challenges by creating new avenues for exploitation. As AI algorithms become more advanced, they can be used to manipulate and distort information, such as generating convincing deepfakes or spreading disinformation campaigns. These tactics not only undermine trust in digital communications but also pose significant risks to political stability, financial markets, and societal harmony.

Here are some of the main threats:

Adversarial attacks: AI systems themselves can be targets of adversarial attacks where attackers manipulate input data to cause the AI to make incorrect decisions. This can be particularly dangerous in critical applications like autonomous vehicles, healthcare, and financial services.

Automated attacks: AI can automate the process of identifying vulnerabilities and launching attacks, making cyber-attacks faster and more efficient. AI-driven tools can scan networks, find weaknesses, and exploit them at a speed and scale unattainable by human hackers.

Sophistication of cyber attacks: AI can be used to automate and enhance the sophistication of cyber-attacks. For example, AI can be utilized to develop malware that adapts to the defenses it encounters, making it more difficult to detect and neutralize.

Ransomware: AI can enhance ransomware by improving the encryption methods used, making it more difficult for victims to recover their data without paying the ransom. AI can also help ransomware spread more efficiently across networks.

AI-powered social engineering: By leveraging AI, attackers can create more convincing phishing campaigns and social engineering attacks. AI can analyze vast amounts of data from social media and other sources to craft personalized messages that are more likely to deceive the recipients.

Sophisticated phishing: AI can create highly convincing phishing emails by analyzing the target’s social media profiles and crafting personalized messages. AI-powered chatbots can engage in real-time conversations with potential victims to extract sensitive information.

Exploitation of AI systems: As organizations increasingly rely on AI for their operations, the AI systems themselves become targets. Attackers might seek to compromise these systems to steal data, alter AI behaviors, or use the compromised systems as a launch pad for further attacks.

Automated vulnerability discovery: AI can rapidly identify vulnerabilities in software and hardware that may not be apparent to human researchers. This capability can be used maliciously to discover and exploit zero-day vulnerabilities.

AI in botnets: AI can manage botnets more effectively, optimizing their use for various attacks like distributed denial-of-service (DDoS) attacks. AI can dynamically adjust the behavior of botnets to evade detection and maximize damage.

Evasion techniques: AI can be employed to test cyber defenses and develop methods to evade detection. For instance, AI can automatically generate variations of malware or exploit code that bypass security mechanisms undetected. AI can develop advanced techniques to evade traditional security measures. For example, malware can use AI to learn how to avoid detection by antivirus software and intrusion detection systems. AI can modify malware code in real-time to avoid signature-based detection methods.

Deepfakes and disinformation: AI-generated deepfakes (audio or video clips that convincingly show real people saying or doing things they never did) can be used to manipulate public opinion, create false evidence, or impersonate individuals in malicious activities. AI-generated deepfake videos and audio can be used for malicious purposes, such as creating fake identities or impersonating individuals in video calls. Deepfakes can be used in social engineering attacks to deceive individuals and gain unauthorized access to sensitive information.

Data poisoning: Attackers can corrupt training datasets with false information to poison AI models, causing them to behave unpredictably or inaccurately. This can compromise the integrity of AI systems used for security purposes (a minimal label-flipping illustration follows this list).

Privacy invasion: AI can analyze vast amounts of data to identify patterns and infer sensitive information about individuals without their consent. This can lead to significant privacy violations and unauthorized access to personal data.

Scaling of attacks: AI enables cyber attackers to automate tasks that were previously performed manually, such as identifying vulnerabilities or crafting phishing emails. This automation allows malicious actors to scale their attacks, targeting more systems or individuals at a faster rate.
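
To make the data-poisoning threat above more concrete, here is a minimal, self-contained sketch; the synthetic data set, the logistic-regression model, and the chosen flip rates are illustrative assumptions rather than a description of any real attack. Flipping even a modest fraction of training labels quietly degrades the deployed classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Label-flipping data poisoning: an attacker who can tamper with the training
# set flips a fraction of labels; the model trained on it silently degrades.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction):
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # flip the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return accuracy_score(y_test, model.predict(X_test))

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned fraction {frac:.2f} -> test accuracy {accuracy_with_poisoning(frac):.3f}")
```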

Accountability and transparency

Another critical concern is the potential for AI to amplify biases in cybersecurity defenses. AI systems learn from historical data, which may reflect biases present in society. If these biases are not properly addressed, AI algorithms could inadvertently perpetuate discrimination or inequities in cybersecurity practices. For example, biased data used to train AI models for threat detection could result in certain demographics or regions being disproportionately targeted or overlooked for protection.

Furthermore, the widespread adoption of AI in cybersecurity introduces challenges related to accountability and transparency. As AI systems autonomously make decisions based on complex algorithms, it can be difficult to trace the logic behind these decisions or to hold anyone accountable for their outcomes. This opacity could hinder efforts to audit, regulate, or improve the security measures implemented by AI systems.

Security of AI training and data: The integrity of AI systems heavily relies on the data used for training. Poisoning attacks, where attackers feed malicious data into the training set, can skew AI decisions, leading to flawed outputs that might be exploited.

Lastly, the rapid pace of AI development and deployment in cybersecurity creates a skills gap. There is a growing demand for cybersecurity professionals who possess expertise in AI and machine learning. Without an adequate workforce trained to understand, monitor, and mitigate AI-driven threats, organizations may struggle to effectively defend against emerging cyber risks.

Algorithmic biases

AI systems can inadvertently perpetuate or even exacerbate existing biases if they’re trained on biased data. This can lead to unfair or unethical outcomes, including discriminatory practices in automated systems that may impact cybersecurity measures and policies.

AI programs can become biased after learning from real-world data. Bias is usually not introduced deliberately by the system designers; it is learned by the program, so programmers are often unaware that it exists. Bias can be introduced inadvertently through the way the training data is selected. It can also result from correlations: AI is used to classify individuals into groups and then to make predictions on the assumption that an individual will resemble the other members of the group. In some cases, this assumption may be incorrect. An example is COMPAS, a commercial program widely used by US courts to assess the likelihood that a defendant will reoffend. ProPublica argued that the recidivism risk assigned by COMPAS to black defendants was far more likely to be an overestimate than the risk assigned to white defendants, even though the program was not given the defendants' race. Other examples where algorithmic bias can lead to unfair results arise when AI is used for credit assessment or employment. (Pedreschi 2020) (Sfetcu 2021)
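
A bias audit of the kind ProPublica performed reduces, at its core, to comparing error rates across groups. The sketch below runs such a check on synthetic data; the group labels, the score model, the built-in bias, and the decision threshold are all assumptions made purely for illustration, not real COMPAS data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic audit data: a binary "reoffended" outcome, a risk score, and a group label.
n = 10_000
group = rng.choice(["A", "B"], size=n)
reoffended = rng.binomial(1, 0.35, size=n)
# Deliberately skewed score: group B receives an extra offset, mimicking a biased model.
score = 0.5 * reoffended + rng.normal(0, 0.4, size=n) + np.where(group == "B", 0.25, 0.0)
predicted_high_risk = score > 0.5

def false_positive_rate(g):
    # FPR = share of high-risk predictions among people who did NOT reoffend.
    mask = (group == g) & (reoffended == 0)
    return predicted_high_risk[mask].mean()

for g in ("A", "B"):
    print(f"group {g}: false positive rate = {false_positive_rate(g):.2%}")
```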

Black boxes

The opacity and black-box nature of AI models are increasing, along with the risk of creating systems exposed to biases in the training data, systems that even experts fail to understand. Tools are lacking that would allow AI developers to certify the reliability of their models. It is crucial to inject AI technologies with the ethical values of fairness (how to avoid unfair and discriminatory decisions), accuracy (how to provide reliable information), confidentiality (how to protect the privacy of the people involved) and transparency (how to make models and decisions understandable to all interested parties). This value-sensitive design approach aims to increase the widespread social acceptance of AI without inhibiting its power. (Pedreschi 2020) (Sfetcu 2021)

COMPAS, owned by Northpointe Inc., is a predictive model of the risk of criminal recidivism that was until recently used by various US courts to support judges' decisions on parole requests. Journalists from Propublica.org collected thousands of cases in which the model was used and showed that it has a strong racial bias: black defendants who will not go on to reoffend are assigned roughly double the risk of white defendants under the same conditions (Mattu 2020). The model, developed with machine learning techniques, likely inherited the bias present in historical sentencing and is affected by the fact that the US prison population contains far more black than white inmates (Sfetcu 2021).

The top three credit reporting agencies in the United States, Experian, TransUnion, and Equifax, are often at odds. In a study of 500,000 cases, 29% of credit applicants had risk assessments that differed by more than 50 points between the three companies, which can mean differences of tens of thousands of dollars in total interest. Such wide variability suggests very different, as well as opaque, scoring assumptions, or a strong degree of arbitrariness (Carter and Auken 2006).

In the 1970s and 1980s, St. George's Hospital Medical School in London used software to screen job applications; it was later found to be highly discriminatory against women and ethnic minorities, which it inferred from first names and places of birth. Algorithmic discrimination is not a new phenomenon and is not necessarily due to machine learning (Lowry and Macpherson 1988).

A classifier based on deep learning can be very accurate on the training data and at the same time completely unreliable, for example if it learned from poor-quality data. In one image-recognition case aimed at distinguishing huskies from wolves in a large data set, researchers dissected the resulting black box only to find that the decision to classify an image as “wolf” was based solely on the snow in the background (Ribeiro, Singh, and Guestrin 2016)! The fault, of course, lies not with deep learning but with the accidental choice of training examples, in which every wolf had evidently been photographed in the snow. So a husky in the snow is automatically classified as a wolf. Translating this example to the vision system of a self-driving car: how can we be sure that it will correctly recognize every object around us? (Pedreschi 2020) (Sfetcu 2021)


The problems of opening the black box. Source: (Guidotti et al. 2018)

Various studies, such as the one mentioned in (Caliskan, Bryson, and Narayanan 2017), show that texts on the web (but also in the media in general) contain biases and prejudices, such as the fact that the names of white people are more often associated with words with a positive emotional charge, while the names of black people are more often associated with words with a negative emotional charge. Therefore, models trained on texts for sentiment and opinion analysis are highly likely to inherit the same biases (Pedreschi 2020) (Sfetcu 2021).

Bloomberg data journalists (Ingold and Soper 2016) have shown how the automated model used by Amazon to select neighborhoods in US cities to offer free “same-day delivery” has an ethnic bias. The software, without the company’s knowledge, systematically excludes areas inhabited by ethnic minorities in many cities, including nearby ones. Amazon responded to the journalist’s inquiry that it was not aware of this practice because the machine learning model was completely autonomous and based its choices on previous customer activity. In short, it’s the algorithm’s fault (Pedreschi 2020) (Sfetcu 2021).

„The right to explanation”

Through machine learning (ML) and deep learning (DL) we create systems that we do not yet fully understand. The European legislator has become aware of this pitfall, and perhaps the most innovative and forward-looking provision of the General Data Protection Regulation (GDPR), the privacy regulation that entered into force in Europe on May 25, 2018, is precisely the right to explanation: the right to obtain meaningful information about the logic adopted by any automated decision-making system that has legal effects, or “similarly relevant” ones, on the persons involved. Without technology capable of explaining the logic of black boxes, however, the right to explanation is destined to remain a dead letter, or to prohibit many applications of opaque ML. This is not only about digital ethics, avoiding discrimination and injustice, but also about security and corporate responsibility. (ENISA 2020) (Sfetcu 2021)

In areas such as cars, robotic assistants, IoT systems for home automation and manufacturing, and personalized precision medicine, companies are launching services and products with AI components that could inadvertently incorporate erroneous, safety-critical decisions, learned from errors or from false correlations in the training data: for example, recognizing an object in a photo from properties of the background rather than of the object itself, due to a systematic bias in the collection of training examples. How can companies trust their products without understanding and validating how they work? Explainable AI technology is critical to creating products with reliable AI components, protecting consumer safety, and limiting industrial liability. (ENISA 2020) (Sfetcu 2021)

Consequently, the scientific use of ML, as in medicine, biology, economics or the social sciences, requires understanding not only to gain confidence in the results, but also because of the open nature of scientific research, so that it can be shared and built upon. The challenge is hard and stimulating: an explanation must not only be correct and comprehensive, but also comprehensible to a multitude of subjects with different needs and skills, from the user making the decision to developers of AI solutions, researchers, data scientists, policy makers, supervisory authorities, civil rights associations, and journalists. (ENISA 2020) (Sfetcu 2021)

What an “explanation” is was already investigated by Aristotle in his Physics, a treatise dating from the 4th century BCE. Today there is an urgent need to make explanation work as an interface between humans and algorithms that suggest decisions, or that decide directly, so that AI serves to augment human capabilities rather than replace them. In general, explanatory approaches differ for the different types of data from which we want to learn a model. For tabular data, for example, explanatory methods try to identify which variables contribute to a specific decision or prediction, in the form of if-then-else rules or decision trees. (ENISA 2020) (Sfetcu 2021)
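
One common way to approximate the if-then rules mentioned above for a tabular black box is to train an interpretable surrogate on the black box's own predictions. The following sketch is only a minimal illustration under assumed choices (the breast-cancer data set, a random forest as the "black box", and a depth-3 surrogate tree are assumptions for demonstration, not a method prescribed by the source), using scikit-learn.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Black-box model: an ensemble whose internal logic is hard to read directly.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a shallow tree trained to imitate the black box's predictions,
# yielding human-readable if-then rules that approximate its behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("Fidelity to the black box:", (surrogate.predict(X) == black_box.predict(X)).mean())
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

A surrogate of this kind only mimics the black box; its fidelity on the data at hand should always be reported alongside the extracted rules.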

In the past two years there has been a vigorous research effort on explainable artificial intelligence, but a practical, systematically applicable explanation technology has not yet emerged. There are two broad ways to approach the problem (ENISA 2020) (Sfetcu 2021).

  • Explanation by design (XbD): given a decision data set, how to build a “transparent automated decision maker” that provides easy-to-understand suggestions.
  • Explanation of black boxes (Bbx): given a set of decisions produced by an “opaque automatic decision maker”, how to reconstruct an explanation for each decision.

Today we have encouraging results that allow us to piece together individual explanations, answers to questions like “Why wasn’t I chosen for the position I applied for? What would I have to change to change the decision?”  (Guidotti et al. 2018)
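
Questions such as “What would I have to change to change the decision?” correspond to counterfactual explanations. The brute-force sketch below only conveys the idea on a toy model; the "credit" features, the hidden approval rule, and the search grid are hypothetical assumptions, and real counterfactual methods are far more careful about the plausibility of the suggested changes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "credit" model over three features: [income, debt, years_employed].
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3)) * [20, 10, 4] + [50, 20, 5]
y = (X[:, 0] - X[:, 1] + 2 * X[:, 2] > 45).astype(int)   # hidden approval rule
model = LogisticRegression(max_iter=1000).fit(X, y)

feature_names = ["income", "debt", "years_employed"]

def smallest_single_feature_change(x):
    # Grow the size of a single-feature change until the model's decision flips.
    base = model.predict(x.reshape(1, -1))[0]
    for step in np.arange(0.5, 50.0, 0.5):
        for i, name in enumerate(feature_names):
            for direction in (+1.0, -1.0):
                candidate = x.copy()
                candidate[i] += direction * step
                if model.predict(candidate.reshape(1, -1))[0] != base:
                    return f"change {name} by {direction * step:+.1f}"
    return "no single-feature counterfactual found in the search range"

applicant = np.array([40.0, 30.0, 2.0])
print("original decision:", model.predict(applicant.reshape(1, -1))[0])
print("counterfactual   :", smallest_single_feature_change(applicant))
```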

The first distinction is between XbD and Bbx. The latter can be further divided into model explanation, when the goal is to explain the entire logic of the black-box model; outcome explanation, when the goal is to explain the decision for a particular case; and model inspection, when the goal is to understand specific properties of the black-box model.

We are rapidly moving from a time when humans coded algorithms, taking responsibility for the correctness and quality of the software produced and for the choices represented in it, to a time when machines independently deduce algorithms from a sufficient number of examples of expected input/output behavior. In this disruptive scenario, the idea of AI black boxes that can be opened and easily understood is functional not only for verifying their correctness and quality, but especially for aligning algorithms with human values and expectations, and for preserving, or possibly expanding, the autonomy and awareness of our decisions (ENISA 2020) (Sfetcu 2021).

Threat actors

There are different groups of threat actors who may want to attack AI systems using cyber means.

Cybercriminals are primarily motivated by profit. They will tend to use AI as a tool to conduct attacks, but also to exploit vulnerabilities in existing AI systems. For example, they could try to hack AI-enabled chatbots to steal credit card or other data. Alternatively, they can launch a ransomware attack against AI-based systems used to manage supply chains and warehousing. (ENISA 2020) (Sfetcu 2021).

Company personnel, including employees and contractors who have access to an organization's networks, can include both those who have malicious intent and those who may harm the company unintentionally. For example, malicious insiders could try to steal or sabotage the data set used by the company's AI systems. Conversely, non-malicious individuals may accidentally corrupt such a data set.

Nation-state actors and other state-sponsored attackers are generally advanced. In addition to developing ways to use AI systems to attack other countries (including critical industries and infrastructure), as well as using AI systems to defend their own networks, nation-state actors are actively looking for vulnerabilities in AI systems that they can exploit. This could be as a means of causing harm to another country or as a means of gathering information.

Other threat actors include terrorists, who seek to cause physical harm or even loss of life. For example, terrorists may want to hack into driverless cars to use as a weapon (ENISA 2020) (Sfetcu 2021).

Hacktivists, who tend to be mostly ideologically motivated, may also try to hack AI systems to show that it can be done. There are a growing number of groups concerned about the potential dangers of AI, and it is not inconceivable that they could hack an AI system to gain publicity. There are also unsophisticated threat actors, such as amateur hackers (haxors), who may be criminally or ideologically motivated. These are generally unskilled people who use pre-written scripts or programs to attack systems because they lack the expertise to write their own. Beyond the traditional threat actors discussed above, it is becoming increasingly necessary to include competitors as threat actors as some companies increasingly intend to attack their rivals to gain market share (Sailio, Latvala, and Szanto 2020).

Taxonomy of threats


Taxonomy of artificial intelligence threats. Source: (ENISA 2020)

The list below presents a high-level threat classification list based on the ENISA threat taxonomy (ENISA 2016), which was used to map the AI threat landscape (ENISA 2020) (Sfetcu 2021):

  • Nefarious activity/abuse (NAA): “intended actions that target ICT systems, infrastructure, and networks by means of malicious acts with the aim to either steal, alter, or destroy a specified target”.
  • Eavesdropping/Interception/Hijacking (EIH): “actions aiming to listen, interrupt, or seize control of a third party communication without consent”.
  • Physical attacks (PA): “actions which aim to destroy, expose, alter, disable, steal or gain unauthorised access to physical assets such as infrastructure, hardware, or interconnection”.
  • Unintentional Damage (UD): unintentional actions causing “destruction, harm, or injury of property or persons and results in a failure or reduction in usefulness”.
  • Failures or malfunctions (FM): “Partial or full insufficient functioning of an asset (hardware or software)”.
  • Outages (OUT): “unexpected disruptions of service or decrease in quality falling below a required level”.
  • Disaster (DIS): “a sudden accident or a natural catastrophe that causes great damage or loss of life”.
  • Legal (LEG): “legal actions of third parties (contracting or otherwise), in order to prohibit actions or compensate for loss based on applicable law”. (ENISA 2020)

Threat modeling methodologies

Threat modeling involves identifying threats and eventually listing and prioritizing them (Shostack 2014). There are various methodologies for performing threat modeling, STRIDE (Microsoft 2009) being one of the most prominent. In the context of future risk/treatment assessments of artificial intelligence (AI) for specific use cases, the threat modeling methodology may involve five steps, namely (ENISA 2020) (Sfetcu 2021) (a schematic sketch of how the outputs of steps 3 and 4 can be recorded follows the list):

  1. Identifying the objectives: Identify the security properties that the system should have.
  2. Study: Map the system, its components and their interactions and interdependencies with external systems.
  3. Asset identification: Identify security-critical assets that need protection.
  4. Threat identification: Identify the threats to the assets that will result in failure to meet the above stated objectives.
  5. Vulnerability identification: Determine – usually based on existing attacks – whether the system is vulnerable to the identified threats[1].
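
The sketch referenced above is given below. It is only an illustrative way of recording the outputs of steps 3 and 4 as structured data; the asset names, threat descriptions, and tags are hypothetical examples, with the category codes taken from the ENISA taxonomy listed earlier in this article.

```python
from dataclasses import dataclass, field

# Step 3 (asset identification) and step 4 (threat identification) recorded as
# simple structured data, tagged with ENISA high-level threat categories.

@dataclass
class Asset:
    name: str
    security_properties: list   # properties that must hold, e.g. confidentiality

@dataclass
class Threat:
    description: str
    enisa_category: str          # e.g. "NAA", "EIH", "UD" from the taxonomy above
    affected_assets: list = field(default_factory=list)
    affected_properties: list = field(default_factory=list)

assets = [
    Asset("training data set", ["confidentiality", "integrity"]),
    Asset("deployed model", ["integrity", "availability"]),
]

threats = [
    Threat("label-flipping data poisoning", "NAA",
           ["training data set"], ["integrity"]),
    Threat("model theft via exposed inference API", "EIH",
           ["deployed model"], ["confidentiality"]),
]

# Step 4 output: which threats endanger which asset.
for asset in assets:
    relevant = [t.description for t in threats if asset.name in t.affected_assets]
    print(f"{asset.name}: {relevant}")
```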

To develop the AI threat landscape, we can consider both traditional security properties and security properties that are more relevant to the AI domain. The former include confidentiality, integrity and availability with additional security properties including authenticity, authorization and non-repudiation, while the latter are more specific to AI and include robustness, trust, safety, transparency, explainability, accountability as well as data protection[2].

The impact of threats on confidentiality, integrity and availability is presented first; based on the impact on these fundamental security properties, the impact of threats on the additional security properties is mapped as follows (ENISA 2020) (Sfetcu 2021) (a compact encoding of this mapping is sketched after the list):

  • Authenticity can be affected when integrity is compromised because the authenticity of data or results could be affected.
  • Authorization may be affected when confidentiality and integrity are affected, given that the legitimacy of the operation may be affected.
  • Non-repudiation can be affected when integrity is affected.
  • The robustness of an AI system/application can be affected when availability and integrity are affected.
  • Trust in an AI system/application may be affected when integrity, confidentiality, and availability are compromised, as the AI system/application may be operating with corrupt or underperforming data.
  • Safety can be affected when integrity or availability are affected, as these properties can have a negative impact on the proper functioning of an AI system/application.
  • Transparency can be affected when confidentiality, integrity, or availability are affected because it prevents disclosure of why and how an AI system/application behaved as it did.
  • Explainability can be affected when confidentiality, integrity, or availability are affected because it prevents the inference of adequate explanations as to why an AI system/application behaved as it did.
  • Accountability can be affected when integrity is affected because it prevents the attribution of verified actions to their owners.
  • Protection of personal data may be affected when confidentiality, integrity or availability are affected. For example, breach of confidentiality (for example, achieved by combining different data sets for the same person) may result in disclosure of personal data to unauthorized recipients. Violations of integrity (e.g., poor data quality or ‘biased’ input data sets) can lead to automated decision-making systems that misclassify people and exclude them from certain services or deprive them of their rights. Availability issues can disrupt access to personal data in important AI-based services. Transparency and explainability can directly affect the protection of personal data, while accountability is also an inherent aspect of personal data protection. In general, AI systems and applications can significantly limit human control over personal data, thereby leading to conclusions about individuals that directly impact their rights and freedoms. This can happen either because the results of the machine deviate from the results expected by the individuals, or because they do not meet the expectations of the individuals.
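
As noted before the list, this dependency structure can be captured as a small lookup table. The sketch below is merely one possible encoding (the Python representation is an assumption; the dependencies themselves follow the list above) that propagates a compromise of confidentiality, integrity, or availability to the higher-level properties it can affect.

```python
# Which higher-level properties can be affected when each fundamental
# security property (confidentiality, integrity, availability) is compromised,
# following the list above.
DEPENDS_ON = {
    "authenticity":             {"integrity"},
    "authorization":            {"confidentiality", "integrity"},
    "non-repudiation":          {"integrity"},
    "robustness":               {"availability", "integrity"},
    "trust":                    {"confidentiality", "integrity", "availability"},
    "safety":                   {"integrity", "availability"},
    "transparency":             {"confidentiality", "integrity", "availability"},
    "explainability":           {"confidentiality", "integrity", "availability"},
    "accountability":           {"integrity"},
    "personal data protection": {"confidentiality", "integrity", "availability"},
}

def affected_properties(compromised):
    """Return the higher-level properties potentially affected by a compromise."""
    compromised = set(compromised)
    return sorted(p for p, deps in DEPENDS_ON.items() if deps & compromised)

print(affected_properties({"integrity"}))
print(affected_properties({"confidentiality"}))
```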

After the security properties have been introduced, and based on the AI lifecycle reference model and the assets identified, the next step in the considered methodology involves the identification of threats and vulnerabilities. To identify the threats, each asset is considered individually and as part of a group, and the relevant failure modes are highlighted (Asllani, Lari, and Lari 2018) in terms of the security properties mentioned above. By identifying threats to assets, we are able to map the threat landscape of AI systems. Furthermore, the effects of the identified threats on the vulnerabilities of AI systems are also emphasized by referring to specific manifestations of the attacks. This supports the introduction of proportionate security measures and controls in the future (ENISA 2020) (Sfetcu 2021).

Conclusion

In conclusion, while AI holds immense promise for enhancing cybersecurity defenses, it also introduces new and complex challenges. From sophisticated AI-driven cyber-attacks to biases in AI algorithms and issues of accountability, the threats posed by AI in cybersecurity require careful consideration and proactive measures. Addressing these challenges necessitates collaboration between policymakers, technologists, and cybersecurity experts to develop ethical frameworks, enhance regulatory oversight, and promote responsible AI innovation. Only through concerted efforts can we harness the full potential of AI while safeguarding our digital infrastructures and societal well-being.

Addressing these threats requires advanced cybersecurity measures, including the development of AI-driven defense systems that can anticipate and counter AI-based attacks. A robust cybersecurity strategy needs to include AI-specific considerations, such as securing AI data sets and algorithms, monitoring AI systems for malicious activity, and understanding the potential use of AI by adversaries. Collaboration between cybersecurity experts, AI researchers, and policymakers is crucial to creating robust frameworks that ensure the safe and ethical use of AI technologies.

Bibliography

  • Asllani, Arben, Alireza Lari, and Nasim Lari. 2018. “Strengthening Information Technology Security through the Failure Modes and Effects Analysis Approach.” International Journal of Quality Innovation 4 (1): 5. https://doi.org/10.1186/s40887-018-0025-1.
  • Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. 2017. “Semantics Derived Automatically from Language Corpora Contain Human-like Biases.” Science 356 (6334): 183–86. https://doi.org/10.1126/science.aal4230.
  • Carter, Richard, and Howard Auken. 2006. “Small Firm Bankruptcy.” Journal of Small Business Management 44 (October):493–512. https://doi.org/10.1111/j.1540-627X.2006.00187.x.
  • ENISA. 2016. “Threat Taxonomy.” File. ENISA. 2016. https://www.enisa.europa.eu/topics/cyber-threats/threats-and-trends/enisa-threat-landscape/threat-taxonomy/view.
  • ———. 2020. “Artificial Intelligence Cybersecurity Challenges.” Report/Study. ENISA. 2020. https://www.enisa.europa.eu/publications/artificial-intelligence-cybersecurity-challenges.
  • Guidotti, Riccardo, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, and Fosca Giannotti. 2018. “A Survey Of Methods For Explaining Black Box Models.” arXiv. https://doi.org/10.48550/arXiv.1802.01933.
  • Ingold, David, and Spencer Soper. 2016. “Amazon Doesn’t Consider the Race of Its Customers. Should It?” Bloomberg.Com. 2016. http://www.bloomberg.com/graphics/2016-amazon-same-day/.
  • Ito, Joi. 2019. “Adversarial Attacks on Medical Machine Learning.” MIT Media Lab. 2019. https://www.media.mit.edu/publications/adversarial-attacks-on-medical-machine-learning/.
  • Lowry, Stella, and Gordon Macpherson. 1988. “A Blot on the Profession.” British Medical Journal (Clinical Research Ed.) 296 (6623): 657–58.
  • Mattu, Surya, Julia Angwin, Jeff Larson, and Lauren Kirchner. 2020. “Machine Bias.” ProPublica. 2020. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  • Microsoft. 2009. “The STRIDE Threat Model.” November 12, 2009. https://learn.microsoft.com/en-us/previous-versions/commerce-server/ee823878(v=cs.20).
  • Pedreschi, D. 2020. “Artificial Intelligence (AI): New Developments and Innovations Applied to e-Commerce | Think Tank | European Parliament.” 2020. https://www.europarl.europa.eu/thinktank/en/document/IPOL_IDA(2020)648791.
  • Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” arXiv. https://doi.org/10.48550/arXiv.1602.04938.
  • Sailio, Mirko, Outi-Marja Latvala, and Alexander Szanto. 2020. “Cyber Threat Actors for the Factory of the Future.” Applied Sciences 10 (12): 4334. https://doi.org/10.3390/app10124334.
  • Sfetcu, Nicolae. 2021. Introducere în inteligența artificială. Nicolae Sfetcu. https://www.telework.ro/ro/e-books/introducere-in-inteligenta-artificiala/.
  • Shostack, Adam. 2014. “Threat Modeling: Designing for Security | Wiley.” Wiley.Com. 2014. https://www.wiley.com/en-us/Threat+Modeling%3A+Designing+for+Security-p-9781118809990.

Notes

[1] Vulnerability identification has not been extensively explored here, as specific use cases must be considered to perform this step.

[2] The AI-specific security properties were based on the work of the EC High-Level Expert Group on AI (AI HLEG) on the Assessment List for Trustworthy AI: https://ec.europa.eu/digital-single-market/en/news/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment

 

Articol cu Acces Deschis (Open Access) distribuit în conformitate cu termenii licenței de atribuire Creative Commons CC BY SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).
