Appendix 1

Harm / Benefit Category Tables

The following appendix supplements the ID HCAI Framework. It provides example harms and benefits of AI that can help regulators identify use cases with significant harm or benefit to society. The specific harms and benefits are organized into the following focus areas:

  • Physical and Psychological Health
  • Consequential Services
  • Human Rights and Liberties
  • Social and Democratic Structures
  • Performance

Each focus area contains categories with example harms and benefits. These examples are intended to guide the framework user through the types of potentially relevant harms or benefits associated with each category. The listed harms and benefits are illustrative, not exhaustive. The framework user has flexibility in determining the specific weightings for each category, given that the importance of specific categories may vary by sector and context.
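
Although the framework leaves scoring and weighting decisions to its user, the bookkeeping involved can be illustrated with a brief sketch. The example below is purely illustrative: the category names, scores, weights, and the weighted-average aggregation are assumptions chosen for demonstration, not part of the framework itself.

```python
# Illustrative sketch only: the framework does not prescribe an aggregation
# formula. The categories, weights, and weighted average below are assumptions.

from dataclasses import dataclass

@dataclass
class CategoryRating:
    name: str        # e.g., "Physical Injury"
    score: int       # 1 ("Very Low") through 5 ("Very High")
    weight: float    # relative importance assigned by the framework user

def weighted_average(ratings: list[CategoryRating]) -> float:
    """Combine per-category scores into a single weighted score."""
    total_weight = sum(r.weight for r in ratings)
    if total_weight == 0:
        raise ValueError("At least one category must carry a non-zero weight")
    return sum(r.score * r.weight for r in ratings) / total_weight

# Hypothetical harm assessment for an imagined medical triage system
harm_ratings = [
    CategoryRating("Physical Injury", score=4, weight=3.0),
    CategoryRating("Privacy Loss", score=3, weight=2.0),
    CategoryRating("Operational Degradation", score=2, weight=1.0),
]

print(f"Weighted harm score: {weighted_average(harm_ratings):.2f}")  # 3.33
```

Any other aggregation rule (for example, treating a single “Very High” harm rating as decisive) could be substituted without changing the recorded inputs.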

1. Physical and Psychological Health

The following section describes example harms related to the risk of injury, and example benefits related to improving health or protecting from injury.

Physical Injury / Physical Health

Table 1.1: Physical Injury: How AI systems could hurt people or create dangerous environments.

Harm | Example
Bodily injury or death | Development or use of the AI system could result in severe physical harm. Accidents could be unintended, or the result of misuse, system malfunction, reduced appropriate human oversight, or exposure to unsafe environments or situations
Damage to critical infrastructure | Development or use of the AI system could result in damage or destruction of critical infrastructure such as water, sewage, transportation, or energy systems. Damage or destruction to infrastructure could potentially harm populations or groups, especially vulnerable populations (e.g., power loss to a hospital)
Exposure to unhealthy agents | Development or use of the AI system could result in exposure to unhealthy agents and jeopardize health
Medical misdiagnosis | Use of the AI system could result in wrong drug recommendations for a patient or detection/diagnostic errors (e.g., failure to detect a tumor on a radiological scan)
Technology-facilitated violence | Use of the AI system could incite or enable offline violence
Assessment | Assign a “5” or “Very High” if there are clear risks of physical harm (e.g., due to minimal or no safeguards, or because sufficient warnings of potential dangers are not provided). Assign a “1” or “Very Low” if there is minimal risk of harm (e.g., safeguards are provided that are adequate for the functional properties of the system)
Weighting | Depends on whether the system has functionality that could hurt those directly or indirectly subject to it, or lacks relevant characteristics that could conceivably lead to accidents

Table 2.1: Physical Health: How AI systems could protect from injury or dangerous environments.

Benefit | Example
Hazard reduction | Use of the AI system could minimize health risks from tasks that can harm people through repetitive movement, exposure to unhealthy agents, or working in dangerous conditions[1]
Medical diagnosis | Use of the AI system could provide consistent analysis of patient data for more accurate diagnoses of a variety of diseases, or aid in the detection of critical diseases earlier in their progression than conventional methods[2]
Medical access | Use of the AI system could enable more patients to access care and allow medical professionals to reach larger segments of the population, particularly in underserved areas or areas with limited resources[3]
Crime reduction or prevention | AI-driven security or surveillance systems could alert human analysts to patterns or to abnormal or suspicious activity[4]
Assessment | Assign a “5” or “Very High” if there are clear signs that physical health and safety can be improved with use of the system, or sufficient warnings for potential dangers are provided. Assign a “1” or “Very Low” if the system provides minimal improvements in health and safety, or sufficient warnings are not provided
Weighting | Depends on whether the system has functionality that could affect the health and safety of those directly or indirectly subject to it, or lacks relevant characteristics that could conceivably lead to protection from injury or dangerous environments

Emotional or Psychological Injury / Emotional or Psychological Health

Table 1.2: Emotional or Psychological Injury: How misused AI systems can lead to severe psychological/emotional distress.

Harm | Example
Distortion of reality or gaslighting | Intentional misuse of the AI system that undermines the end-user’s trust in established institutions and distorts their sense of reality
Addiction/Attention hijacking | Prolonged interaction with the AI system that leads to addiction affecting the end-user’s well-being, potentially at the expense of happiness and life satisfaction, sense of direction or purpose, relationships and human interactions, and personal character
Reputation damage | The AI system could make analyses, recommend actions, or use language which disparages a person’s characteristics or situation
Identity theft | Use of the AI system leads to loss of control over personal credentials, reputation, or representation
Dehumanization | Use of the AI system could erode, obstruct, or deny the subjectivity, individuality, agency, or distinctly human attributes of people
Harassment | Use of the AI system could lead to online abuse (e.g., cyberbullying, deadnaming, doxxing, trolling, hateful or toxic language, gender-based sexual harassment)
Invalidation | Use of the AI system could result in the denial, rejection, or dismissal of a population’s or group’s feelings or experiences
Misattribution | Use of the AI system could result in misattribution of an action or content to a person or group of individuals
Loss of autonomy | Use of the AI system could lead the end-user to have involuntary thoughts or feelings, or perform actions that are uncharacteristic or against their will
Intrusion on emotional state | Use of technology like face recognition to detect, analyze, process, and interpret non-verbal communication cues (facial expression, eye contact, body language, etc.) in order to intrude upon, harass, or manipulate individuals
Assessment | Assign a “5” or “Very High” if the prevention of unwanted uses and undesirable extensions of the scope of use has not been systematically and successfully addressed in the development and design of the system. Assign a “1” or “Very Low” if these concerns have been addressed during system development and design
Weighting | Depends on whether the system can be used in an undesirable way, whether the consequences of misuse or extensions of system scope are serious, and whether there are interactions with public environments

Table 2.2: Emotional or Psychological Health: How AI systems can improve emotional and psychological health.

Benefit | Example
Emotional analysis/intelligence | Use of the AI system could enhance the ability to detect, analyze, process, and interpret non-verbal communication cues (facial expression, eye contact, body language, etc.) in order to better understand social cues or assist with, for example, pediatric pain management or accessibility scenarios
Companionship | Use of AI systems can provide human-like interactions where human-to-human interaction is not otherwise available, or extend the scope of human-to-human interactions by detecting where it is needed and calling on a person to do the interaction[5]
Emotional liberation | Use of AI systems can provide human-like interactions that can help reduce self-restraint, allow people to be more willing to express themselves, reduce the feeling of being judged, and make them feel more at ease[6]
Character improvement | The AI system could make analyses, recommend actions, or use language which improves a person’s characteristics or situation[7]
Assessment | Assign a “5” or “Very High” if there are clear signs that emotional and psychological health can be improved with use of the system, or sufficient warnings for potential dangers are provided. Assign a “1” or “Very Low” if the system provides minimal improvements in emotional or psychological health, or sufficient warnings are not provided
Weighting | Depends on whether the system has functionality that could affect the emotional or psychological health of those directly or indirectly subject to it, and if there are interactions with public environments

2. Consequential Services

The following section describes example harms related to the denial of consequential services, and example benefits related to access to consequential services.

Opportunity Loss / Opportunity Access[8]

Table 1.3: Opportunity Loss: How AI systems could lead to decisions that limit access to resources, services, and opportunities.

Harm | Example
Employment discrimination | Use of the AI system could result in discriminatory recommendations or decisions related to employment, where the end-user is denied access to apply for or secure a job based on characteristics unrelated to merit
Housing discrimination | Use of the AI system could result in discriminatory recommendations or decisions related to housing, where the end-user is denied access to housing or the ability to apply for housing
Insurance and benefit discrimination | Use of the AI system could result in inequitable access, cost, or allocation of insurance or social benefits, where the end-user is denied insurance, social assistance, or access to a medical trial due to biased standards
Educational discrimination | Use of the AI system could result in inequitable access, accommodations, or other outcomes related to education, where the end-user is denied access to education due to unchangeable characteristics
Assessment | Assign a “5” or “Very High” if problems of bias have not been addressed at any stage of the development, design, and testing of the system, or the system is known for being biased. Assign a “1” or “Very Low” if these problems of bias have been adequately addressed or solved
Weighting | Depends on whether system functions include activities which can negatively affect basic human interests and rights, or access to resources, services, or opportunities

Table 2.3: Opportunity Access: How AI systems could affect decisions that improve access to resources, services, and opportunities.[9]

Benefit | Example
Employment access | Use of the AI system could result in unbiased recommendations, or a reduction in discriminatory recommendations or decisions, related to employment, where the end-user is provided access to apply for or secure a job based on merit
Employment opportunity | AI system development results in employment opportunities not otherwise available to a population or group
Housing access | Use of the AI system could result in unbiased recommendations, or a reduction in discriminatory recommendations or decisions, related to housing, where the end-user is provided access to housing or the ability to apply for housing
Insurance and benefit access | Use of the AI system could result in more equitable access, cost, or allocation of insurance or social benefits, where the end-user has access to insurance, social assistance, or a medical trial due to unbiased standards
Educational access | Use of the AI system could result in more equitable access, accommodations, or other outcomes related to education, where the end-user is provided access to education regardless of unchangeable characteristics
Assessment | Assign a “5” or “Very High” if the system significantly reduces existing societal inequities, or has protocols in place to minimize their occurrence. Assign a “1” or “Very Low” if the system amplifies existing societal inequities
Weighting | Depends on whether system functions include activities which can affect basic human interests and rights, or access to resources, services, or opportunities

Economic Loss / Economic Access

Table 1.4: Economic Loss: How AI systems related to financial instruments, economic opportunity, and resources can amplify existing societal inequities.

Harm | Example
Credit discrimination | Use of the AI system (e.g., biased recommendation systems) could result in difficulties obtaining or maintaining a sufficiently high credit score, where the end-user is denied access to financial instruments based on characteristics unrelated to economic merit
Price discrimination | Use of the AI system could result in differential pricing of goods or services for different demographics of people, where the end-user might be offered goods or services at unaffordable prices for reasons unrelated to the cost of production or delivery
Financial loss | Use of the AI system could result in underpricing of goods or services for reasons unrelated to the cost of production or delivery, which might result in financial loss for the service provider
Devaluation of individual occupation(s) | Use of the AI system could result in a broader economic imbalance by minimizing or supplanting the use of paid human expertise or labor
Assessment | Assign a “5” or “Very High” if the system amplifies existing societal inequities. Assign a “1” or “Very Low” if the system only partially amplifies existing societal inequities, or has protocols in place to significantly reduce their occurrence
Weighting | Depends on whether system functions include activities which have an effect on existing societal inequities

Table 2.4: Economic Access: How AI systems related to financial instruments, economic opportunity, and resources can reduce existing societal inequities.[10]

Benefit | Example
Credit access | Use of the AI system ensures equitable access to financial instruments where the end-user is provided the ability to obtain or maintain a sufficiently high credit score
Fair pricing | Use of the AI system could ensure consistent and equitable pricing of goods or services for different demographics of people, at price points that result in favorable revenue for the service provider
Assessment | Assign a “5” or “Very High” if the system significantly reduces existing societal inequities, or has protocols in place to minimize their occurrence. Assign a “1” or “Very Low” if the system amplifies existing societal inequities
Weighting | Depends on whether system functions include activities which have an effect on existing societal inequities

3. Human Rights and Liberties

The following section describes example harms related to the infringement on human rights, and example benefits related to upholding or improving human rights.

Liberty Loss / Liberty Protection

Table 1.5: Liberty Loss: AI recommendations and influences on legal, judicial, and social systems can reinforce biases and lead to detrimental consequences.

Harm | Example
False accusation | Use of the AI system could result in exacerbating human bias, misattribution of suspicious behavior or criminal intent, wrongful arrest, or unreasonable searches and seizures based on historical records or incorrect inferences
Social control and homogeneity | Use of the AI system could induce conformity or compliance and affect rights to freedom of association, freedom of expression or practice of religion, or personal agency
Loss of effective remedy | The inability to follow, understand, and explain the rationale behind decisions made by the AI system could leave end-users unable to contest, question, or trust those decisions
Assessment | Assign a “5” or “Very High” if the system could compel conformity or compliance, or otherwise result in loss of individual rights. Assign a “1” or “Very Low” if the system does not affect individual rights, and there is sufficient awareness of the capabilities and limitations of the system
Weighting | Depends on whether system functions include activities which have an effect on legal, judicial, or social systems

Table 2.5: Liberty Protection: How AI system recommendations and influences on legal, judicial, and social systems can reduce biases and detrimental consequences.

Benefit | Example
Criminal justice | Use of AI systems in predictive risk analysis could reduce human bias in law enforcement and sentencing systems[11]
Assessment | Assign a “5” or “Very High” if the system reduces bias and results in fairer legal, judicial, or social systems. Assign a “1” or “Very Low” if the system does not reduce biases, or otherwise results in loss of individual rights
Weighting | Depends on whether system functions include activities which have an effect on legal, judicial, or social systems

Privacy Loss / Privacy Protection

Table 1.6: Privacy Loss: The information generated by development or use of the AI system could be used to determine facts or make assumptions about someone without their knowledge.

Harm | Example
Privacy violation | Non-consensual data collection or other operations could lead to loss of data privacy or inadequate protection of personally identifiable information (PII)
Dignity loss | Exposing, compelling, or misleading users to share sensitive or socially inappropriate information, which could influence how people are perceived or viewed
Forced association | Requiring participation in the development or use of the AI system in order to participate in society or obtain organizational membership
Permanent record | Digital files or records of end-user activity could be retained and remain searchable indefinitely
Loss of anonymity | Data and activity monitoring by the AI system could limit the end-user’s ability to navigate the physical or virtual world with desired anonymity
Assessment | Assign a “5” or “Very High” if conditions of data privacy and security are not met, or relevant data is not stored in a safe and secure way. Assign a “1” or “Very Low” if relevant data is stored or managed in a safe and secure way
Weighting | Depends on whether personal or private data is stored or managed by the system

Table 2.6: Privacy Protection: How the AI system can be used to detect or manage sensitive information.

Benefit | Example
Fraud detection | Use of the AI system could provide the ability to precisely identify possible fraudulent activities (e.g., abnormalities, outliers, or deviant cases) that might require additional investigation related to the manipulation, release, or access to sensitive data[12]
Anonymity | Use of the AI system could enhance the end-user’s ability to navigate the physical or virtual world with desired anonymity[13]
Sensitive data management | Use of the AI system can provide protection for sensitive data that might accidentally be exposed to humans (e.g., routing requests for healthcare records between providers)[14]
Data tracking | AI-driven data and privacy protection platforms could help organizations identify sensitive data and track and control all data movement within and outside their enterprise[15]
Assessment | Assign a “5” or “Very High” if conditions of data privacy and security are met, or relevant data is stored in a safe and secure way. Assign a “1” or “Very Low” if relevant data is not stored or managed in a safe and secure way
Weighting | Depends on whether personal or private data is stored or managed by the system

Negative Environmental Impact / Positive Environmental Impact

Table 1.7: Negative Environmental Impact: How the environment and populations or groups could be negatively impacted by the AI system life cycle.

Harm | Example
Adverse environmental impacts | Development or use of the AI system could lead to damage of the natural environment, damage to the built environment or property, exploitation or depletion of environmental resources, or displacement of inhabitants where resources are located
Chemical exposure | Development or use of the AI system could expose the environment, populations, or groups to toxic chemicals
Climate change | Development or use of the AI system could lead to unnecessary carbon emissions, or cause other climate harm
Assessment | Assign a “5” or “Very High” if there is risk of long-term impact on the natural or built environment and its inhabitants that cannot be mitigated or prevented. Assign a “1” or “Very Low” if the impact on the natural or built environment and its inhabitants is very low, and short-term environmental policies and regulations have been taken into account
Weighting | Depends on whether there are relevant characteristics that can reasonably influence an ecosystem

Table 2.7: Positive Environmental Impact: How the environment and populations or groups could be positively impacted by the AI system life cycle.

Benefit | Example
Environmental impacts | Use of the AI system could improve conservation and environmental efforts, including improving recycling systems, managing renewable energy for maximum efficiency, forecasting energy demand in large cities, making agricultural practices more efficient and environmentally friendly, and protecting endangered habitats[16]
Weather and environmental forecasting | Use of the AI system could increase the accuracy of weather and environmental condition forecasts, which would be important for the agriculture, utility, transportation, and shipping/logistics industries[17]
Natural disaster prediction | AI-driven systems could help experts predict when and where disasters may strike with more accuracy, allowing people more time to keep themselves and their homes safe in the case of a natural disaster, and improving emergency relief response times[18]
Assessment | Assign a “5” or “Very High” if the system provides a positive impact on the natural or built environment and its inhabitants, and short-term environmental policies and regulations have been taken into account. Assign a “1” or “Very Low” if there is minimal impact, or a risk of long-term negative impact(s) that cannot be mitigated or prevented
Weighting | Depends on whether there are relevant characteristics that can reasonably influence an ecosystem

4. Social and Democratic Structures

The following section describes example harms related to the erosion of social and democratic structures, and example benefits related to the improvement of social and democratic structures.

Manipulation / Incentivization

Table 1.8: Manipulation: How the AI system’s ability to create highly personalized and manipulative experiences can undermine an informed citizenry and trust in societal structures.

Harm | Example
Misinformation | Use of the AI system could result in the unintentional release of false or incorrect information
Disinformation | The AI system could be exploited to deliberately release false or incorrect information, or to disguise it as legitimate or credible, in order to deceive people
Malinformation | The AI system could be used to maliciously bring genuine information that was intended to remain private into the public sphere
Behavioral exploitation, coercion | Use of the AI system could result in exploitation of personal preferences or patterns of behavior beyond that of typical marketing or advertising to induce a desired reaction
Fraudulent behavior | Use of the AI system to intentionally conduct a deceptive action for unlawful gain
Assessment | Assign a “5” or “Very High” if there are clear risks that end-users can be harmed by the system due to the absence of safeguards against exploitation or manipulation (e.g., guidelines for data and consumer protection). Assign a “1” or “Very Low” if safeguards are provided that are adequate for the functional properties of the system
Weighting | Depends on whether the system has functionality that could result in exploitation or manipulation

Table 2.8: Incentivization: How the AI system can encourage decisions or actions that benefit individuals or society.

Benefit | Example
Beneficial default actions | Use of the AI system could increase the chances of a specific beneficial outcome (e.g., automatic enrollment or selection of default options)
Increased/personalized knowledge | Use of the AI system could contribute to a more informed citizenry. Use of AI systems could also improve educational quality by providing personalized instruction (e.g., development of instructional strategies, resources, tutoring, and evaluations tailored to each student’s capabilities and limitations), real-time feedback on student responses, or freeing up additional instructional time by expediting administrative tasks (grading, scheduling, record-keeping, etc.)
Assessment | Assign a “5” or “Very High” if use of the system increases the chances of a good or positive outcome. Assign a “1” or “Very Low” if system operations have limited impact on outcome selection
Weighting | Depends on whether the system has functionality to encourage specific outcomes

Social Detriment / Social Improvement

Table 1.9: Social Detriment: At scale, the way AI systems can negatively impact people and the social and economic structures within communities.

Harm | Example
Loss of freedom of thought, movement, or assembly | Use of the AI system could impact freedom of movement, freedom of thought, rights to association, peaceful assembly, or democratic participation in government
Erosion of democracy | Use of the AI system could result in election interference, censorship, and harm to civil liberties
Stereotype reinforcement | Use of the AI system could reinforce or amplify existing harmful social norms, cultural stereotypes, or undesirable representations about historically or statistically underrepresented demographics of people
False perception | Use of the AI system could result in the proliferation of false perceptions about individuals or groups
Loss of representation/individuality | The AI system could make use of broad categories of generalization for individuals or groups, which can constrain, obscure, or suppress unique forms of expression, or diminish individuality, identities, or designations
Social erasure | Use of the AI system could result in unequal visibility of certain social groups
Social alienation | Use of the AI system could result in a failure to acknowledge an individual or group’s membership in a culturally significant social group
Loss of individuality | Use of the AI system could suppress unique forms of expression and amplify majority opinions or “groupthink”
Denial of self-identity | Use of the AI system could result in non-consensual classifications or representations of a person or groups of people
Assessment | Assign a “5” or “Very High” if the system can be used to influence or erode existing democratic or socioeconomic structures for a given population, or forcefully impede the ability to improve their lives. Assign a “1” or “Very Low” if system operations have limited impact on existing democratic or socioeconomic structures for members of a given population or group
Weighting | Depends on whether the system has relevant characteristics that can influence the democratic or socioeconomic structures

Table 2.9: Social Improvement: At scale, the way AI systems can positively impact people and shape social and economic structures within communities.

Benefit | Example
Transparency and accountability | The AI system could help streamline the ability to collect and analyze large amounts of publicly available material which can be used to keep organizations and governments accountable[19]
Bias detection | Use of the AI system to process data at scale could be used to detect biases in policing and legislative actions[20]
Fact checking | Use of the AI system could automate fact-checking for identifying deepfakes and misleading information if used in combination with detection algorithms and AI classifiers.[21] AI-enabled fact-checking could also provide information to end-users to inform content engagement
Assessment | Assign a “5” or “Very High” if the system can be used to influence or improve existing democratic or socioeconomic structures for a given population, or assist in the ability to improve their lives. Assign a “1” or “Very Low” if system operations have limited impact on existing democratic or socioeconomic structures for members of a given population or group
Weighting | Depends on whether the system has relevant characteristics that can influence the democratic or socioeconomic structures

5. Performance

The following section describes example harms related to the reduction of operational performance, and example benefits related to the improvement of processes, operations, and productivity.

Operational Degradation / Operational Improvement

Table 1.10: Operational Degradation: How the AI system might degrade processes, performance, and output.

Harm | Example
Skills atrophy | Overreliance on the AI system could lead to degradation of skills necessary for a fulfilling life, complacency, and a reduced ability to access and use manual controls
Temporal degradation | Temporal data drifts or lack of model retraining and evaluation could result in performance degradation of the AI system over time[22]
Reduced efficiency | The AI-assisted system could provide a reduction in efficiency or workflow relative to the current system/state[23]
Job simplification | The adoption of AI could simplify the tasks performed by employees, potentially resulting in lower wages, particularly for those who are already in a lower income bracket
Work pace | Implementation of AI to reduce tedious or dangerous tasks could increase stress on workers completing more tasks of greater intensity at a higher pace
Assessment | Assign a “5” or “Very High” if the system can be used to worsen the operational performance or output of a given organization, group, or team. Assign a “1” or “Very Low” if system operations have limited impact on performance or output
Weighting | Depends on whether the system has relevant characteristics that can influence operational performance or output

Table 2.10: Operational Improvement: How the AI system might improve processes, performance, and output.[24]

Benefit | Example
Enhanced productivity | Use of the AI system could improve productivity and cost savings (time and labor), and promote the human workforce to higher-value tasks through the replacement of manual or repetitive and routine processes with automation
Customer personalization | Use of the AI system could provide personalized recommendations based on pattern recognition in customer data, which could in turn improve marketing return on investment (ROI) and boost sales
Increased revenue | Use of the AI system could aid in identifying and maximizing sales opportunities
Constant availability | The AI system can run constantly and consistently with 24/7 availability, and can theoretically work endlessly to the same standard without breaks
Faster data management and decision-making | The AI system has the ability to analyze and manage massive amounts of data and recognize patterns that aren’t apparent to humans, which could reduce the time associated with making decisions and performing subsequent action(s)
Value above replacement | The AI-assisted system provides an improvement in efficiency or workflow over the current system/state
Assessment | Assign a “5” or “Very High” if the system can be used to improve the operational performance or output of a given organization, group, or team. Assign a “1” or “Very Low” if system operations have limited impact on performance or output
Weighting | Depends on whether the system has relevant characteristics that can influence operational performance or output

[1] Foundations of Assessing Harm, Microsoft (2022).

[2] Darrell M. West & John R. Allen, How Artificial Intelligence is Transforming the World, Brookings (2018).

[3] Kanadpriya Basu, et al., Artificial Intelligence: How is It Changing Medical Sciences and Its Future?, Indian Journal of Dermatology at 365–370 (2020). 

[4] Christian Davenport, Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says, Washington Post (2017); Foundations of Assessing Harm, Microsoft (2022).

[5] Laura Donnelly, Digital Assistants Could Alleviate the Loneliness of Elderly, The Telegraph (2018).

[6] Sophie Kleber, 3 Ways AI is Getting More Emotional, Harvard Business Review (2018).

[7] Foundations of Assessing Harm, Microsoft (2022).

[8] Because assessments of benefits and harms are especially insightful when the deployment context is known (e.g., how the AI system has been deployed, which populations are impacted, and the socio-technical relationship of the technology), the “opportunity/loss” assessment for the “Consequential Services” categories might be especially challenging to perform at the anticipatory level, before the AI system has been deployed in society. It is therefore suggested that an initial assessment be conducted based on anticipated deployment and then periodically revisited based on actual deployment insights.

[9] Foundations of Assessing Harm, Microsoft (2022).

[10] Foundations of Assessing Harm, Microsoft (2022).

[11] Foundations of Assessing Harm, Microsoft (2022); Darrell M. West & John R. Allen, How Artificial Intelligence is Transforming the World, Brookings (2018).

[12] Artificial Intelligence, Automation, and the Economy, Executive Office of the President at 27-28 (2016); Foundations of Assessing Harm, Microsoft (2022).

[13] Foundations of Assessing Harm, Microsoft (2022).

[14] David Roe, The Role of AI in Ensuring Data Privacy, CMSWIRE (2020).

[15] Remesh Rachendran, How Artificial Intelligence Is Countering Data Protection Challenges Facing Organizations, Entrepreneur (2019).

[16] Foundations of Assessing Harm, Microsoft (2022).

[17] Archer Charles, Top Benefits of Artificial Intelligence, Koenig (2023).

[18] Archer Charles, Top Benefits of Artificial Intelligence, Koenig (2023).

[19] Khari Johnson, How AI Can Empower Communities and Strengthen Democracy, Venture Beat (2020).

[20] Darrell M. West & John R. Allen, How Artificial Intelligence is Transforming the World, Brookings (2018).

[21] Artificial Intelligence, Automation, and the Economy, Executive Office of the President at 27-28 (2016).

[22] Daniel Vela, et al., Temporal Quality Degradation in AI Models, Scientific Reports (2022).

[23] Andrew Green, et al., Artificial Intelligence, Job Quality and Inclusiveness, OECD Employment Outlook 2023 (2023).

[24] Dimitri Antonenko, Business Benefits of Artificial Intelligence, Business Tech Weekly (2020).  
