Sfetcu, Nicolae (2024), Threats of Artificial Intelligence for Cybersecurity, IT & C, 3:3, 38-52, DOI: 10.58679/IT17846, https://www.internetmobile.ro/threats-of-artificial-intelligence-for-cybersecurity/
Abstract
Artificial intelligence enables automated decision-making and facilitates many aspects of daily life, bringing with it improvements in operations and numerous other benefits. However, AI systems face numerous cybersecurity threats, and AI itself needs to be secured, as cases of malicious attacks have already been reported: AI techniques and AI-based systems can produce unexpected results and can be manipulated to distort the expected outcomes. It is therefore essential to understand the AI threat landscape and to have a common, unifying basis for understanding potential threats, and consequently to perform specific risk assessments. The latter will support the implementation of targeted and proportionate security measures and controls to counter AI-related threats. This article explores the potential threats posed by AI in cybersecurity and discusses the implications for individuals, organizations, and society at large.
Keywords: artificial intelligence, threats, cybersecurity, cyber-attacks, black boxes, algorithmic biases, threat actors, threat taxonomy, threat modeling
IT & C, Volume 3, Issue 3, September 2024, pp. 38-52
ISSN 2821 – 8469, ISSN – L 2821 – 8469, DOI: 10.58679/IT17846
URL: https://www.internetmobile.ro/threats-of-artificial-intelligence-for-cybersecurity/
© 2024 Nicolae Sfetcu. Responsibility for the content, interpretations, and opinions expressed rests exclusively with the authors.
Introduction
In the digital age, the rapid advancement of artificial intelligence (AI) has revolutionized various industries, including cybersecurity. However, with this transformative technology comes a double-edged sword. While AI offers unprecedented opportunities to bolster defense mechanisms, it also introduces new threats and vulnerabilities that malicious actors can exploit.
Artificial intelligence enables automated decision-making and facilitates many aspects of daily life, bringing with it improvements in operations and numerous other benefits. However, AI systems face numerous cybersecurity threats, and AI itself needs to be secured, as cases of malicious attacks have already been reported: AI techniques and AI-based systems can produce unexpected results and can be manipulated to distort the expected outcomes (Ito 2019). It is therefore essential to understand the AI threat landscape and to have a common, unifying basis for understanding potential threats, and consequently to perform specific risk assessments. The latter will support the implementation of targeted and proportionate security measures and controls to counter AI-related threats (ENISA 2020) (Sfetcu 2021).
This article explores the potential threats posed by AI in cybersecurity and discusses the implications for individuals, organizations, and society at large.
Cyber attacks
One of the primary concerns regarding AI in cybersecurity is its potential misuse by cybercriminals. AI-powered attacks can be more sophisticated and adaptive than traditional methods. For instance, AI algorithms can analyze vast amounts of data to identify patterns and vulnerabilities in systems, enabling attackers to launch highly targeted and effective cyber-attacks. These attacks can range from automated phishing campaigns that craft convincing messages based on deep analysis of social media profiles to AI-driven malware that evolves its behavior to evade detection by traditional security measures.
Moreover, AI can exacerbate existing cybersecurity challenges by creating new avenues for exploitation. As AI algorithms become more advanced, they can be used to manipulate and distort information, such as generating convincing deepfakes or spreading disinformation campaigns. These tactics not only undermine trust in digital communications but also pose significant risks to political stability, financial markets, and societal harmony.
Here are some of the main threats:
Adversarial attacks: AI systems themselves can be targets of adversarial attacks, where attackers manipulate input data to cause the AI to make incorrect decisions (a minimal code sketch follows this list). This can be particularly dangerous in critical applications such as autonomous vehicles, healthcare, and financial services.
Automated attacks: AI can automate the process of identifying vulnerabilities and launching attacks, making cyber-attacks faster and more efficient. AI-driven tools can scan networks, find weaknesses, and exploit them at a speed and scale unattainable by human hackers.
Sophistication of cyber attacks: AI can be used to automate and enhance the sophistication of cyber-attacks. For example, AI can be utilized to develop malware that adapts to the defenses it encounters, making it more difficult to detect and neutralize.
Ransomware: AI can enhance ransomware by improving the encryption methods used, making it more difficult for victims to recover their data without paying the ransom. AI can also help ransomware spread more efficiently across networks.
AI-powered social engineering: By leveraging AI, attackers can create more convincing phishing campaigns and social engineering attacks. AI can analyze vast amounts of data from social media and other sources to craft personalized messages that are more likely to deceive the recipients.
Sophisticated phishing: AI can create highly convincing phishing emails by analyzing the target’s social media profiles and crafting personalized messages. AI-powered chatbots can engage in real-time conversations with potential victims to extract sensitive information.
Exploitation of AI systems: As organizations increasingly rely on AI for their operations, the AI systems themselves become targets. Attackers might seek to compromise these systems to steal data, alter AI behaviors, or use the compromised systems as a launch pad for further attacks.
Automated vulnerability discovery: AI can rapidly identify vulnerabilities in software and hardware that may not be apparent to human researchers. This capability can be used maliciously to discover and exploit zero-day vulnerabilities.
AI in botnets: AI can manage botnets more effectively, optimizing their use for various attacks like distributed denial-of-service (DDoS) attacks. AI can dynamically adjust the behavior of botnets to evade detection and maximize damage.
Evasion techniques: AI can be employed to probe cyber defenses and develop methods to evade detection. For instance, AI can automatically generate variations of malware or exploit code that bypass security mechanisms undetected, learn how to avoid detection by antivirus software and intrusion detection systems, and modify malware code in real time to defeat signature-based detection methods.
Deepfakes and disinformation: AI-generated deepfakes (audio or video clips that convincingly show real people saying or doing things they never did) can be used to manipulate public opinion, create false evidence, create fake identities, or impersonate individuals, for example in video calls. Deepfakes can also be used in social engineering attacks to deceive individuals and gain unauthorized access to sensitive information.
Data poisoning: Attackers can corrupt training datasets with false information to poison AI models, causing them to behave unpredictably or inaccurately. This can compromise the integrity of AI systems used for security purposes.
Privacy invasion: AI can analyze vast amounts of data to identify patterns and infer sensitive information about individuals without their consent. This can lead to significant privacy violations and unauthorized access to personal data.
Scaling of attacks: AI enables cyber attackers to automate tasks that were previously performed manually, such as identifying vulnerabilities or crafting phishing emails. This automation allows malicious actors to scale their attacks, targeting more systems or individuals at a faster rate.
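To make the adversarial-attack entry above concrete, here is a minimal, self-contained sketch (not taken from the cited sources; the model, the synthetic data, and the perturbation budget eps are illustrative choices) of a fast-gradient-sign-style evasion attack against a simple logistic-regression classifier:

```python
# Illustrative evasion attack (FGSM-style) against a binary logistic-regression
# classifier: perturb test inputs in the direction that increases the loss.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on clean inputs:", clf.score(X_test, y_test))

# For logistic regression, the gradient of the log-loss w.r.t. the input x
# is (p - y) * w, where p is the predicted probability of class 1 and w the weights.
w = clf.coef_.ravel()
p = clf.predict_proba(X_test)[:, 1]
grad = (p - y_test)[:, None] * w[None, :]

eps = 0.5                                   # perturbation budget (illustrative value)
X_adv = X_test + eps * np.sign(grad)        # step that increases the classifier's loss
print("accuracy on adversarial inputs:", clf.score(X_adv, y_test))
```

Against deep models the same idea applies, with the gradient obtained by backpropagation; defenses such as adversarial training and input sanitization target exactly this class of manipulation.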
Accountability and transparency
Another critical concern is the potential for AI to amplify biases in cybersecurity defenses. AI systems learn from historical data, which may reflect biases present in society. If these biases are not properly addressed, AI algorithms could inadvertently perpetuate discrimination or inequities in cybersecurity practices. For example, biased data used to train AI models for threat detection could result in certain demographics or regions being disproportionately targeted or overlooked for protection.
Furthermore, the widespread adoption of AI in cybersecurity introduces challenges related to accountability and transparency. As AI systems autonomously make decisions based on complex algorithms, it can be difficult to trace the logic behind these decisions or to hold anyone accountable for their outcomes. This opacity could hinder efforts to audit, regulate, or improve the security measures implemented by AI systems.
Security of AI training and data: The integrity of AI systems heavily relies on the data used for training. Poisoning attacks, where attackers feed malicious data into the training set, can skew AI decisions, leading to flawed outputs that might be exploited.
Lastly, the rapid pace of AI development and deployment in cybersecurity creates a skills gap. There is a growing demand for cybersecurity professionals who possess expertise in AI and machine learning. Without an adequate workforce trained to understand, monitor, and mitigate AI-driven threats, organizations may struggle to effectively defend against emerging cyber risks.
Algorithmic biases
AI systems can inadvertently perpetuate or even exacerbate existing biases if they’re trained on biased data. This can lead to unfair or unethical outcomes, including discriminatory practices in automated systems that may impact cybersecurity measures and policies.
AI programs can become biased after learning from real-world data. Bias is usually not introduced by system designers but is learned by the program, so programmers are often unaware that it exists. Biases can be introduced inadvertently by the way the training data is selected. Bias can also result from correlations: AI is used to classify individuals into groups and then make predictions assuming that the individual will resemble other members of the group; in some cases this assumption is incorrect. An example is COMPAS, a commercial program widely used by US courts to assess the likelihood that a defendant will reoffend. ProPublica argues that the recidivism risk scores COMPAS assigned to black defendants were more likely to be overestimates than those assigned to white defendants, even though the program was not given the defendants’ race as an input. Other examples where algorithmic bias can lead to unfair results are the use of AI for credit scoring or hiring. (Pedreschi 2020) (Sfetcu 2021)
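As a hedged illustration of how a model can acquire group bias without ever being given the protected attribute, the sketch below uses entirely synthetic data (all feature names, coefficients, and thresholds are invented for illustration): the model sees only a "merit" feature and a proxy feature correlated with group membership, yet ends up with unequal false-positive rates across groups.

```python
# Synthetic illustration: bias learned through a proxy feature, even though the
# protected attribute itself is never an input to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
group = rng.integers(0, 2, n)                      # protected attribute (never shown to the model)
proxy = group + rng.normal(0, 0.5, n)              # e.g. a neighbourhood score correlated with group
merit = rng.normal(0, 1, n)                        # feature genuinely related to the outcome
y = (merit + 0.3 * group + rng.normal(0, 1, n) > 0).astype(int)  # historical, already skewed outcome

X = np.column_stack([merit, proxy])                # the model only ever sees merit and the proxy
clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

for g in (0, 1):
    negatives = (group == g) & (y == 0)            # people who did not have the outcome
    fpr = pred[negatives].mean()                   # how often they are nevertheless flagged
    print(f"group {g}: false-positive rate = {fpr:.2f}")
```

The disparity in false-positive rates arises purely from the proxy feature and the skewed historical labels, which is the mechanism described above.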
Black boxes
The opacity and black-box nature of AI models is increasing, along with the risk of building systems exposed to biases in the training data, systems that even experts fail to understand. Tools are lacking that would allow AI developers to certify the reliability of their models. It is crucial to inject AI technologies with the ethical values of fairness (how to avoid unfair and discriminatory decisions), accuracy (how to provide reliable information), confidentiality (how to protect the privacy of the people involved), and transparency (how to make models and decisions understandable to all interested parties). This value-sensitive design approach aims to increase the widespread social acceptance of AI without inhibiting its power. (Pedreschi 2020) (Sfetcu 2021)
COMPAS, owned by Northpointe Inc., is a predictive model of the risk of criminal recidivism that was used until recently by various US courts to support judges’ decisions on parole requests. Journalists from Propublica.org collected thousands of use cases of the model and showed that it has a strong racial bias: black defendants who will not commit another crime are assigned roughly twice the risk of white defendants under the same conditions (Mattu 2020). The model, developed with machine learning techniques, likely inherited the bias present in historical sentencing and is affected by the fact that the US prison population contains far more black than white inmates (Sfetcu 2021).
The top three credit reporting agencies in the United States, Experian, TransUnion, and Equifax, are often at odds. In a study of 500,000 cases, 29% of credit applicants had risk assessments differing by more than 50 points between the three companies, which can mean differences of tens of thousands of dollars in total interest paid. Such wide variability suggests very different, and opaque, scoring assumptions, or a high degree of arbitrariness (Carter and Auken 2006).
In the 1970s and 1980s, St George’s Hospital Medical School in London used software to screen applications that was later found to be highly discriminatory against women and ethnic minorities, inferred from first names and places of birth. Algorithmic discrimination is not a new phenomenon and is not necessarily due to machine learning (Lowry and Macpherson 1988).
A classifier based on deep learning can be very accurate on the training data and at the same time completely unreliable, for example if it has learned from poor-quality data. In an image-recognition case aimed at distinguishing huskies from wolves in a large data set, the resulting black box was dissected by researchers only to find that the decision to classify an image as “wolf” was based solely on the snow in the background (Ribeiro, Singh, and Guestrin 2016)! The fault, of course, lies not with deep learning but with the accidental choice of training examples, in which every wolf had evidently been photographed in the snow. So a husky in the snow is automatically classified as a wolf. Translating this example to the vision system of a self-driving car: how can we be sure that it will correctly recognize every object around us? (Pedreschi 2020) (Sfetcu 2021)
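The wolf/husky anecdote can be reproduced in miniature. The following sketch (synthetic data; the feature names and noise levels are invented for illustration) gives a classifier a "snow in the background" feature that is almost perfectly tied to the label in the training set but not at deployment, and uses scikit-learn's permutation importance to show that the model relies on the background rather than on the animal:

```python
# Miniature version of the wolf-vs-husky / snow-background failure: a feature
# spuriously correlated with the label in training dominates the model and
# breaks generalisation once that correlation disappears.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 5000
y = rng.integers(0, 2, n)                         # 1 = "wolf", 0 = "husky"
animal = y + rng.normal(0, 2.0, n)                # weak, genuine signal about the animal
snow_train = y + rng.normal(0, 0.1, n)            # background: almost perfectly tied to the label
X_train = np.column_stack([animal, snow_train])

clf = RandomForestClassifier(random_state=0).fit(X_train, y)

# At "deployment" the background no longer tracks the label (a husky in the snow).
snow_test = rng.normal(0.5, 0.1, n)
X_test = np.column_stack([animal, snow_test])
print("training accuracy:", clf.score(X_train, y))
print("deployment accuracy:", clf.score(X_test, y))

imp = permutation_importance(clf, X_train, y, random_state=0)
print("importance of [animal, snow]:", imp.importances_mean.round(2))
```

Training accuracy is near perfect while deployment accuracy collapses toward chance, and the importance scores point at the spurious "snow" feature.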
Figure: The problems of opening the black box. Source: (Guidotti et al. 2018)
Various studies, such as the one mentioned in (Caliskan, Bryson, and Narayanan 2017), show that texts on the web (but also in the media in general) contain biases and prejudices, such as the fact that the names of white people are more often associated with words with a positive emotional charge, while the names of black people are more often associated with words with a negative emotional charge. Therefore, models trained on texts for sentiment and opinion analysis are highly likely to inherit the same biases (Pedreschi 2020) (Sfetcu 2021).
Bloomberg data journalists (Ingold and Soper 2016) have shown how the automated model used by Amazon to select neighborhoods in US cities to offer free “same-day delivery” has an ethnic bias. The software, without the company’s knowledge, systematically excludes areas inhabited by ethnic minorities in many cities, including nearby ones. Amazon responded to the journalist’s inquiry that it was not aware of this practice because the machine learning model was completely autonomous and based its choices on previous customer activity. In short, it’s the algorithm’s fault (Pedreschi 2020) (Sfetcu 2021).
“The right to explanation”
Through machine learning (ML) and deep learning (DL) we create systems that we do not yet fully understand. The European legislator has become aware of this pitfall, and perhaps the most innovative and forward-looking provision of the General Data Protection Regulation (GDPR), the privacy regulation that entered into force in Europe on May 25, 2018, is precisely the right to explanation: the right to obtain meaningful information about the logic adopted by any automated decision-making system that has legal effects, or “similarly relevant” ones, on the persons involved. Without technology capable of explaining the logic of black boxes, however, the right to explanation is destined to remain a dead letter, or to prohibit many applications of opaque ML. It is not only about digital ethics, avoiding discrimination and injustice, but also about security and corporate responsibility. (ENISA 2020) (Sfetcu 2021)
In areas such as cars, robotic assistants, IoT systems for home automation and manufacturing, and personalized precision medicine, companies are launching services and products with AI components that could inadvertently incorporate erroneous, safety-critical decisions learned from errors or from spurious correlations in the training data; for example, a model that recognizes an object in a photo not from properties of the object itself but from properties of the background, due to a systematic bias in how the training examples were collected. How can companies trust their products without understanding and validating how they work? Explainable AI technology is critical to creating products with reliable AI components, protecting consumer safety, and limiting industrial liability. (ENISA 2020) (Sfetcu 2021)
Consequently, the scientific use of ML, as in medicine, biology, economics, or the social sciences, requires understanding not only to have confidence in the results, but also to preserve the open nature of scientific research, so that it can be shared and built upon. The challenge is tough and stimulating: an explanation must be not only correct and comprehensive, but also comprehensible to a multitude of subjects with different needs and skills, from the user making the decision, to developers of AI solutions, researchers, data scientists, policy makers, supervisory authorities, civil rights associations, and journalists. (ENISA 2020) (Sfetcu 2021)
The question of what an “explanation” is was already investigated by Aristotle in his Physics, a treatise dating from the 4th century BCE. Today there is an urgent need for explanations that function as an interface between humans and the algorithms that suggest decisions, or that decide directly, so that AI serves to augment human capabilities, not to replace them. In general, explanatory approaches differ according to the type of data from which we want to learn a model. For example, for tabular data, explanatory methods try to identify which variables contribute to a specific decision or prediction, in the form of if-then-else rules or decision trees. (ENISA 2020) (Sfetcu 2021)
In the past two years there has been an intense research effort on explainable artificial intelligence, but a technology for practical and systematically applicable explanation has not yet emerged. There are two broad ways to approach the problem (ENISA 2020) (Sfetcu 2021):
- Explanation by design (XbD): given a decision data set, how to build a “transparent automated decision maker” that provides easy-to-understand suggestions.
- Explanation of black boxes (Bbx): given a set of decisions produced by an “opaque automated decision maker”, how to reconstruct an explanation for each decision.
Today we have encouraging results that allow us to piece together individual explanations, answers to questions like “Why wasn’t I chosen for the position I applied for? What would I have to change to change the decision?” (Guidotti et al. 2018)
The first distinction concerns XbD versus Bbx. The latter can be further divided into model explanation, when the goal is to explain the entire logic of the black-box model; outcome explanation, when the goal is to explain the decision for a particular case; and model inspection, when the goal is to understand the general properties of the black-box model.
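As a hedged sketch of the model-explanation route (a global surrogate, one of the black-box explanation strategies surveyed by Guidotti et al. 2018, although the code itself is the author's illustration), an interpretable decision tree is fitted to the predictions of an opaque model and its if-then rules are printed:

```python
# Black-box explanation via a global surrogate: fit an interpretable decision
# tree to the *predictions* of an opaque model, then read off its rules.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)   # the opaque model
y_bb = black_box.predict(X)                                        # its decisions

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)
print("fidelity to the black box:", surrogate.score(X, y_bb))      # how well the tree mimics it
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The fidelity score says how faithfully the small tree mimics the black box; outcome explanation for a single case would instead use a local method such as LIME (Ribeiro, Singh, and Guestrin 2016).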
We are rapidly moving from a time when humans coded algorithms, taking responsibility for the correctness and quality of the software produced and for the choices represented in it, to a time when machines independently infer algorithms from a sufficient number of examples of expected input/output behavior. In this disruptive scenario, the idea of AI black boxes that are open and easy to understand is functional not only to verifying their correctness and quality, but especially to aligning algorithms with human values and expectations and to preserving, or possibly expanding, the autonomy and awareness of our decisions (ENISA 2020) (Sfetcu 2021).
Threat actors
There are different groups of threat actors who may want to attack AI systems using cyber means.
Cybercriminals are primarily motivated by profit. They will tend to use AI as a tool to conduct attacks, but also to exploit vulnerabilities in existing AI systems. For example, they could try to hack AI-enabled chatbots to steal credit card details or other data. Alternatively, they could launch a ransomware attack against AI-based systems used to manage supply chains and warehousing. (ENISA 2020) (Sfetcu 2021).
Insider threats come from company personnel, including employees and contractors who have access to an organization’s networks; these can be either individuals with malicious intent or individuals who harm the company unintentionally. For example, malicious insiders could try to steal or sabotage the dataset used by the company’s AI systems. Conversely, non-malicious individuals may accidentally corrupt such a data set.
Nation-state actors and other state-sponsored attackers are generally advanced. In addition to developing ways to use AI systems to attack other countries (including critical industries and infrastructure), as well as using AI systems to defend their own networks, nation-state actors are actively looking for vulnerabilities in AI systems that they can exploit. This could be as a means of causing harm to another country or as a means of gathering information.
Other threat actors include terrorists, who seek to cause physical harm or even loss of life. For example, terrorists may want to hack into driverless cars to use as a weapon (ENISA 2020) (Sfetcu 2021).
Hacktivists, who tend to be mostly ideologically motivated, may also try to hack AI systems to show that it can be done. There is a growing number of groups concerned about the potential dangers of AI, and it is not inconceivable that they could hack an AI system to gain publicity. There are also unsophisticated threat actors, such as amateur hackers (“script kiddies”), who may be criminally or ideologically motivated. These are generally unskilled people who use pre-written scripts or programs to attack systems because they lack the expertise to write their own. Beyond the traditional threat actors discussed above, it is becoming increasingly necessary to include competitors as threat actors, as some companies increasingly intend to attack their rivals to gain market share (Sailio, Latvala, and Szanto 2020).
Taxonomy of threats
Figure: Taxonomy of artificial intelligence threats. Source: (ENISA 2020)
The list below presents a high-level threat classification based on the ENISA threat taxonomy (ENISA 2016), which was used to map the AI threat landscape (ENISA 2020) (Sfetcu 2021); a minimal code sketch of these categories follows the list:
- Nefarious activity/abuse (NAA): “intended actions that target ICT systems, infrastructure, and networks by means of malicious acts with the aim to either steal, alter, or destroy a specified target”.
- Eavesdropping/Interception/Hijacking (EIH): “actions aiming to listen, interrupt, or seize control of a third party communication without consent”.
- Physical attacks (PA): “actions which aim to destroy, expose, alter, disable, steal or gain unauthorised access to physical assets such as infrastructure, hardware, or interconnection”.
- Unintentional Damage (UD): unintentional actions causing “destruction, harm, or injury of property or persons and results in a failure or reduction in usefulness”.
- Failures or malfunctions (FM): “Partial or full insufficient functioning of an asset (hardware or software)”.
- Outages (OUT): “unexpected disruptions of service or decrease in quality falling below a required level“.
- Disaster (DIS): “a sudden accident or a natural catastrophe that causes great damage or loss of life”.
- Legal (LEG): “legal actions of third parties (contracting or otherwise), in order to prohibit actions or compensate for loss based on applicable law”. (ENISA 2020)
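For use in the threat-modeling steps discussed in the next section, these high-level categories can be captured as a simple enumeration; a minimal sketch (the code structure is the author's illustration, not part of the ENISA taxonomy itself):

```python
# The high-level ENISA threat categories above, captured as an enum so that
# threat-modeling code can tag threats consistently.
from enum import Enum

class ThreatCategory(Enum):
    NAA = "Nefarious activity/abuse"
    EIH = "Eavesdropping/Interception/Hijacking"
    PA = "Physical attacks"
    UD = "Unintentional damage"
    FM = "Failures or malfunctions"
    OUT = "Outages"
    DIS = "Disaster"
    LEG = "Legal"

print(ThreatCategory.EIH.value)   # -> "Eavesdropping/Interception/Hijacking"
```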
Threat modeling methodologies
Threat modeling involves the process of identifying threats and eventually listing and prioritizing them (Shostack 2014). There are various methodologies for performing threat modeling, STRIDE (Microsoft 2009) being one of the most prominent. In the context of future risk/treatment assessments of artificial intelligence (AI) for specific use cases, the threat modeling methodology may involve five steps, namely (ENISA 2020) (Sfetcu 2021) (a minimal code sketch of these steps follows the list):
- Identifying the objectives: Identify the security properties that the system should have.
- Study: Map the system, its components and their interactions and interdependencies with external systems.
- Asset identification: Identify security-critical assets that need protection.
- Threat identification: Identify the threats to the assets that will result in failure to meet the above stated objectives.
- Vulnerability identification: Determine – usually based on existing attacks – whether the system is vulnerable to identified threats[1].
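A hedged sketch of how these five steps might be recorded programmatically for a concrete use case follows; every concrete name (assets, threats, attacks) is a hypothetical placeholder rather than part of the ENISA methodology:

```python
# Illustrative skeleton of the five threat-modeling steps for an AI use case.
# All concrete names (assets, threats, properties) are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    security_properties: list[str]          # steps 1 and 3: objectives mapped onto critical assets

@dataclass
class Threat:
    description: str
    asset: Asset
    violated_properties: list[str]          # step 4: which objectives the threat defeats
    known_attacks: list[str] = field(default_factory=list)  # step 5: evidence of vulnerability

# Step 2 (study): enumerate the system's components as assets.
training_data = Asset("training dataset", ["integrity", "confidentiality"])
model = Asset("trained model", ["integrity", "availability", "explainability"])

threats = [
    Threat("label-flipping data poisoning", training_data,
           ["integrity"], ["poisoned samples injected via a public data feed"]),
    Threat("model evasion with adversarial inputs", model,
           ["integrity"], ["FGSM-style perturbations"]),
]

# Prioritise assets by how many identified threats touch them.
for asset in (training_data, model):
    hits = [t for t in threats if t.asset is asset]
    print(f"{asset.name}: {len(hits)} threat(s) -> {[t.description for t in hits]}")
```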
To develop the AI threat landscape, we can consider both traditional security properties and security properties that are more relevant to the AI domain. The former include confidentiality, integrity and availability with additional security properties including authenticity, authorization and non-repudiation, while the latter are more specific to AI and include robustness, trust, safety, transparency, explainability, accountability as well as data protection[2].
The impact of threats is assessed on confidentiality, integrity, and availability and, based on the impact on these fundamental security properties, the impact on the additional security properties is mapped as follows (ENISA 2020) (Sfetcu 2021) (a small code sketch of this mapping follows the list):
- Authenticity can be affected when integrity is compromised because the authenticity of data or results could be affected.
- Authorization may be affected when confidentiality and integrity are affected, given that the legitimacy of the operation may be affected.
- Non-repudiation can be affected when integrity is affected.
- The robustness of an AI system/application can be affected when availability and integrity are affected.
- Trust in an AI system/application may be affected when integrity, confidentiality, and availability are compromised, as the AI system/application may be operating with corrupt or underperforming data.
- Safety can be affected when integrity or availability are affected, as these properties can have a negative impact on the proper functioning of an AI system/application.
- Transparency can be affected when confidentiality, integrity, or availability are affected because it prevents disclosure of why and how an AI system/application behaved as it did.
- Explainability can be affected when confidentiality, integrity, or availability are affected because it prevents the inference of adequate explanations as to why an AI system/application behaved as it did.
- Accountability can be affected when integrity is affected because it prevents the attribution of verified actions to their owners.
- Protection of personal data may be affected when confidentiality, integrity or availability are affected. For example, breach of confidentiality (for example, achieved by combining different data sets for the same person) may result in disclosure of personal data to unauthorized recipients. Violations of integrity (e.g., poor data quality or ‘biased’ input data sets) can lead to automated decision-making systems that misclassify people and exclude them from certain services or deprive them of their rights. Availability issues can disrupt access to personal data in important AI-based services. Transparency and explainability can directly affect the protection of personal data, while accountability is also an inherent aspect of personal data protection. In general, AI systems and applications can significantly limit human control over personal data, thereby leading to conclusions about individuals that directly impact their rights and freedoms. This can happen either because the results of the machine deviate from the results expected by the individuals, or because they do not meet the expectations of the individuals.
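The mapping above lends itself to a simple lookup table. The sketch below transcribes the list directly and adds a small helper that, given the fundamental properties a threat compromises, returns the additional properties that are affected (the function name and structure are illustrative):

```python
# The CIA-to-AI-property mapping above, as a lookup table plus a small helper.
AFFECTED_BY = {
    "authenticity":    {"integrity"},
    "authorization":   {"confidentiality", "integrity"},
    "non-repudiation": {"integrity"},
    "robustness":      {"availability", "integrity"},
    "trust":           {"confidentiality", "integrity", "availability"},
    "safety":          {"integrity", "availability"},
    "transparency":    {"confidentiality", "integrity", "availability"},
    "explainability":  {"confidentiality", "integrity", "availability"},
    "accountability":  {"integrity"},
    "data protection": {"confidentiality", "integrity", "availability"},
}

def impacted_properties(cia_impact: set[str]) -> list[str]:
    """Return the additional security properties affected by a threat's CIA impact."""
    return [prop for prop, causes in AFFECTED_BY.items() if causes & cia_impact]

# Example: a data-poisoning threat that compromises integrity only.
print(impacted_properties({"integrity"}))
```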
After the security properties have been introduced, and based on the AI lifecycle reference model and the assets identified, the next step in the considered methodology involves the identification of threats and vulnerabilities. To identify the threats, each asset is considered individually and as part of a group, and the relevant failure modes are highlighted (Asllani, Lari, and Lari 2018) in terms of the security properties mentioned above. By identifying threats to assets, we are able to map the threat landscape of AI systems. Furthermore, the effects of the identified threats on the vulnerabilities of AI systems are also emphasized by referring to specific manifestations of the attacks. This should lead to the introduction of proportionate security measures and controls in the future (ENISA 2020) (Sfetcu 2021).
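Since the methodology leans on failure modes and effects analysis (Asllani, Lari, and Lari 2018) to highlight failure modes, a brief sketch of the classic FMEA prioritization step is shown below, applied to hypothetical AI failure modes; the failure modes and the 1-10 scores are invented for illustration, and the risk priority number is the standard product severity × occurrence × detection:

```python
# FMEA-style prioritisation of hypothetical AI failure modes:
# risk priority number (RPN) = severity x occurrence x detection (each scored 1-10).
failure_modes = [
    # (failure mode, severity, occurrence, detection difficulty)
    ("training data poisoned via third-party feed",  8, 4, 7),
    ("model drift degrades detection accuracy",      6, 6, 5),
    ("adversarial inputs evade the classifier",      9, 3, 8),
    ("inference API leaks membership information",   7, 2, 6),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {s*o*d:4d}  {name}")
```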
Conclusion
In conclusion, while AI holds immense promise for enhancing cybersecurity defenses, it also introduces new and complex challenges. From sophisticated AI-driven cyber-attacks to biases in AI algorithms and issues of accountability, the threats posed by AI in cybersecurity require careful consideration and proactive measures. Addressing these challenges necessitates collaboration between policymakers, technologists, and cybersecurity experts to develop ethical frameworks, enhance regulatory oversight, and promote responsible AI innovation. Only through concerted efforts can we harness the full potential of AI while safeguarding our digital infrastructures and societal well-being.
Addressing these threats requires advanced cybersecurity measures, including the development of AI-driven defense systems that can anticipate and counter AI-based attacks. A robust cybersecurity strategy needs to include AI-specific considerations, such as securing AI data sets and algorithms, monitoring AI systems for malicious activity, and understanding the potential use of AI by adversaries. Collaboration between cybersecurity experts, AI researchers, and policymakers is crucial to creating robust frameworks that ensure the safe and ethical use of AI technologies.
Bibliography
- Asllani, Arben, Alireza Lari, and Nasim Lari. 2018. “Strengthening Information Technology Security through the Failure Modes and Effects Analysis Approach.” International Journal of Quality Innovation 4 (1): 5. https://doi.org/10.1186/s40887-018-0025-1.
- Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. 2017. “Semantics Derived Automatically from Language Corpora Contain Human-like Biases.” Science 356 (6334): 183–86. https://doi.org/10.1126/science.aal4230.
- Carter, Richard, and Howard Auken. 2006. “Small Firm Bankruptcy.” Journal of Small Business Management 44 (October):493–512. https://doi.org/10.1111/j.1540-627X.2006.00187.x.
- ENISA. 2016. “Threat Taxonomy.” File. ENISA. 2016. https://www.enisa.europa.eu/topics/cyber-threats/threats-and-trends/enisa-threat-landscape/threat-taxonomy/view.
- ———. 2020. “Artificial Intelligence Cybersecurity Challenges.” Report/Study. ENISA. 2020. https://www.enisa.europa.eu/publications/artificial-intelligence-cybersecurity-challenges.
- Guidotti, Riccardo, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, and Fosca Giannotti. 2018. “A Survey Of Methods For Explaining Black Box Models.” arXiv. https://doi.org/10.48550/arXiv.1802.01933.
- Ingold, David, and Spencer Soper. 2016. “Amazon Doesn’t Consider the Race of Its Customers. Should It?” Bloomberg.Com. 2016. http://www.bloomberg.com/graphics/2016-amazon-same-day/.
- Ito, Joi. 2019. “Adversarial Attacks on Medical Machine Learning.” MIT Media Lab. 2019. https://www.media.mit.edu/publications/adversarial-attacks-on-medical-machine-learning/.
- Lowry, Stella, and Gordon Macpherson. 1988. “A Blot on the Profession.” British Medical Journal (Clinical Research Ed.) 296 (6623): 657–58.
- Mattu, Surya, Julia Angwin, Jeff Larson, and Lauren Kirchner. 2020. “Machine Bias.” ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
- Microsoft. 2009. “The STRIDE Threat Model.” November 12, 2009. https://learn.microsoft.com/en-us/previous-versions/commerce-server/ee823878(v=cs.20).
- Pedreschi, D. 2020. “Artificial Intelligence (AI): New Developments and Innovations Applied to e-Commerce.” European Parliament Think Tank. https://www.europarl.europa.eu/thinktank/en/document/IPOL_IDA(2020)648791.
- Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” arXiv. https://doi.org/10.48550/arXiv.1602.04938.
- Sailio, Mirko, Outi-Marja Latvala, and Alexander Szanto. 2020. “Cyber Threat Actors for the Factory of the Future.” Applied Sciences 10 (12): 4334. https://doi.org/10.3390/app10124334.
- Sfetcu, Nicolae. 2021. Introducere în inteligența artificială. Nicolae Sfetcu. https://www.telework.ro/ro/e-books/introducere-in-inteligenta-artificiala/.
- Shostack, Adam. 2014. Threat Modeling: Designing for Security. Wiley. https://www.wiley.com/en-us/Threat+Modeling%3A+Designing+for+Security-p-9781118809990.
Notes
[1] Vulnerability identification has not been extensively explored here, as specific use cases must be considered to perform this step.
[2] The AI-specific security properties were based on the work of the EC High-Level Expert Group on AI (AI HLEG) on the Assessment List for Trustworthy AI (ALTAI): https://ec.europa.eu/digital-single-market/en/news/assessment-list-trustworthy-intelligencia-artificial-altai-self-evaluation
Open Access article distributed under the terms of the Creative Commons Attribution-ShareAlike 4.0 license, CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).