FEATURE ARTICLE

Developing Leaders of Character for Responsible Artificial Intelligence

Christopher S. Kuennen, Headquarters United States Air Force

ABSTRACT

Who is responsible for Responsible AI (RAI)? As the Department of Defense (DoD) invests in AI workforce education, this question serves as the starting point for an argument that effective training for military RAI demands focused character development for officers. This essay makes that case in three parts. First, while norms around responsibility and AI are likely to evolve, there remains long-standing legal, ethical, and practical precedent to think of commissioned officers as the loci of responsibility for the application of military AI. Next, given the DoD’s emphasis on responsibility, it should devote significant pedagogical attention to the subjective skills, motivations, and perceptions of operators who depend on AI to execute their mission, beyond merely promoting technical literacy. Finally, the significance of character for RAI entails the application of proven character development methodologies from pre-commissioning education onward: critical dialogue, hands-on practice applying AI in complex circumstances, and moral reminders about the relevance of the DoD’s ethical principles for AI.

Keywords: Artificial Intelligence, RAI, Character Development, Military Ethics, Responsibility

 

Citation: Journal of Character & Leadership Development 2023, 10: 273 - http://dx.doi.org/10.58315/jcld.v10.273

Copyright: © 2023 The author(s). This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

CONTACT Christopher S. Kuennen kuennencs@gmail.com

Published: 25 October 2023

 

Who is responsible for Responsible AI (RAI)? In November 2022, a diverse cohort of Reserve Officer Training Corps (ROTC) cadets, civilian undergraduates, philosophy faculty, and active-duty Air Force personnel wrangled over this question in an echoey ballroom at the University of Texas at El Paso (UTEP). The group was considering a case study involving the application of a machine-learning-trained target identification tool, which, through a series of unfortunate events, was implicated in the avoidable deaths of civilians in a combat zone. Students, faculty, and Airmen were invited to consider who was most responsible for the incident—the analyst who used the tool, the software developers who designed it, the senior leaders who adopted it, the operator who acted upon its output, or someone else? After half an hour of debate, the last word was given by a young Security Forces officer who answered—as if it were obvious—“The commander.”

The concept of responsibility is inextricable from the Department of Defense’s (DoD’s) approach to artificial intelligence. Ever since it published its initial AI strategy in 2018, the DoD has committed itself to “leading in military ethics and AI safety” (p. 8). In 2021, Secretary of Defense Lloyd Austin deemed “Responsible AI…the only kind of AI that we do.” By June 2022, the Department had tasked its components to—among 15 other lines of effort—“Supplement existing DoD AI training efforts with curricula that will enable RAI implementation” (p. 32). In line with the DoD AI Education Strategy, these efforts have largely prioritized educational investment in senior leaders (e.g., Chief Digital and AI Office, 2023), product managers (e.g., Kobren, 2022), and AI developers (e.g., Del Aguila, 2022).

This essay makes a plea to those involved in curating the DoD’s AI education. Their approach must not neglect those who bear ultimate moral responsibility for RAI—the officers and future officers who will command AI-augmented teams. The capacity for true responsibility is a virtue of character, and training for responsibility is therefore tied to the professional character development that begins in the services’ pre-commissioning programs. Developing responsible leaders for the future means letting go of the assumption that “ethical algorithms” are a panacea for RAI and equipping future officers with the specific technical competence and moral virtues required for truly responsible action with AI (cf. Kearns & Roth, 2020).

Who is Responsible?

Responsible AI—one of the Department’s original five ethical principles for AI—means “DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities” (Joint Artificial Intelligence Center, 2020). This principle underlies the 2023 DoD policy on autonomy in weapon systems: “Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force” (p. 3). However, the adoption of responsibility as an ethical principle for AI goes beyond the demand—popularized by the Campaign to Stop Killer Robots—for “meaningful human control” of lethal autonomous weapons systems. The Department’s understanding of responsibility as essentially dependent on human judgment holds across lethal and non-lethal applications, from collaborative combat aircraft to talent management software. Nevertheless, questions about which humans should be responsible and what kinds of judgment might make them so remain open.

While views on moral responsibility will undoubtedly evolve with increasingly pervasive AI adoption, it seems safe to assume that one primary locus of practical responsibility for military AI will continue to be the leaders employing it. There are long-standing legal, ethical, and practical norms that justify attention to officers, and specifically officers in command, as the responsible agents on AI-augmented teams. No matter how societal norms around responsibility change in the long term, the DoD’s commitment to human judgment as a precondition for RAI means these individuals are a critical audience for RAI education.

Legal

The Constitution’s Appointments Clause provides the legal basis for officers “to command military force on behalf of the government” (Office of Legal Counsel, 2007, p. 77). As extensions of Presidential authority, commanders are responsible for their forces’ exercise of military power. This Constitutional precept is consistent with international humanitarian law, which has upheld the doctrine of command responsibility since the advent of modern warfare, making commanders liable for war crimes committed by their subordinates (Legal Information Institute, 2022). Command responsibility, in turn, depends upon the legal definition of a combatant as someone operating under a “responsible command” (Medecins Sans Frontieres, n.d.). The underlying norm across all these precepts is that the individual military commander represents the state and thus incurs individual responsibility for lethal action in accordance with the common good.

Ethical

Ultimately, the legal principles of command derive from the more fundamental premise of military officership as a public service. Samuel Huntington (1985) defines the “vocation of officership” as a profession because its members manifest distinctive expertise, corporateness, and perhaps most importantly, social responsibility. Like doctors, lawyers, and educators, the military officer is “a practicing expert, working in a social context, and performing a service…essential to the functioning of society” (p. 9). So long as American society entrusts the DoD with its defense, its officers maintain professional responsibility for both securing that defense and doing so in a manner compatible with American values.

Practical

Legal and ethical precedent aside, there is an important practical impetus for holding commanders responsible for military employment of AI. Since AI systems can diffuse responsibility across a variety of stakeholders, and since commanders possess ultimate authority over their use, commanders can reasonably be expected to assume responsibility for particular AI outcomes. In short, the buck stops with the boss. This was the instinct of the Security Forces officer at the UTEP workshop, and it is a fundamental tenet of popular leadership theory in and out of the military. Commentators from Chester Barnard (e.g., 1971) to Jocko Willink (e.g., 2017) have consistently affirmed leader accountability as a defining feature of effective management and organizational performance.

For the US military, the concept of responsibility is essential to the role that officers play as stewards of the common good and public trust. RAI education focused too narrowly on senior leaders, acquisitions specialists, and software developers ignores long-standing norms around how the military thinks about responsibility. Doing so assumes that the ethical risks of irresponsible AI inhere primarily in the technology itself, such that if the DoD trains personnel to account for ethical risk in the development of military AI, then its systems are bound to be used responsibly in practice. This assumption fundamentally misconstrues the role of personal responsibility in ethics and thus underestimates the task of developing officers capable of such responsibility.

Objective and Subjective Responsibility

Contemporary discussion around AI ethics often divides moral philosophy into three types of “theories”—deontology, consequentialism, and virtue ethics—then attempts to incorporate one or more of these theories into AI system design (e.g., Pflanzer et al., 2022). Deontological ethics prioritizes development of, and compliance with, morally sound rules—for example, “always tell the truth.” Consequentialist ethics, often identified with utilitarianism, focuses instead on morally preferred outcomes—for example, “it is all right to lie, steal, and cheat, if doing so protects an innocent child.” Finally, virtue ethics focuses on exploring the constitution of moral character—for example, “developing a sense of when it is legitimate to mislead someone takes years of experience.” These concepts, refined in moral philosophy, implicitly or explicitly inform AI system engineering from model development and user experience design through deployment and application.
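To make the contrast concrete, consider a minimal sketch (in Python, with entirely hypothetical names and notional values) of how the first two approaches might be encoded in a decision-support filter. The third approach resists this kind of encoding: virtue ethics concerns the developing character of the agent rather than a computable property of the act, a point taken up below.

```python
from dataclasses import dataclass

@dataclass
class CandidateAction:
    """A hypothetical action proposed by a decision-support model."""
    name: str
    expected_benefit: float        # notional mission value
    expected_civilian_harm: float  # notional harm estimate

def deontological_filter(actions, harm_threshold=0.0):
    """Rule-based: exclude any action that violates a hard constraint,
    no matter how much benefit it promises."""
    return [a for a in actions if a.expected_civilian_harm <= harm_threshold]

def consequentialist_rank(actions):
    """Outcome-based: rank actions by net expected value,
    allowing harms to be traded off against benefits."""
    return sorted(actions,
                  key=lambda a: a.expected_benefit - a.expected_civilian_harm,
                  reverse=True)

if __name__ == "__main__":
    options = [
        CandidateAction("strike A", expected_benefit=9.0, expected_civilian_harm=2.0),
        CandidateAction("strike B", expected_benefit=5.0, expected_civilian_harm=0.0),
        CandidateAction("hold",     expected_benefit=1.0, expected_civilian_harm=0.0),
    ]
    # The two frameworks can disagree: the rule excludes "strike A" outright,
    # while the outcome-based ranking puts it first.
    print([a.name for a in deontological_filter(options)])   # ['strike B', 'hold']
    print([a.name for a in consequentialist_rank(options)])  # ['strike A', 'strike B', 'hold']
```

The toy example is not offered as a design pattern; it simply shows that rule-based and outcome-based criteria are the kinds of things an engineer can encode, while the question of what the operator owes the situation is not.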

While the three approaches introduced above are often presented as competing alternatives, the etymological root of ethics points to a substantially more inclusive conception. The Greek ethos refers to character or way of life—as in, for example, the “warrior ethos.” Understood as a theory of character, ethics is not only about good outcomes or rules. It is also about what it means for moral subjects to live well. Such a conception of ethics accounts for both the objective and subjective aspects of moral behavior, so that virtue ethics need not be directly contrasted with the normative prescriptions of rule-based or outcome-based systems. By and large, virtue ethics is concerned with the character of moral agents as they develop over time, not with the objective criteria by which acts can be judged in specific instances.

Given a robust conception of both the subjective and objective aspects of ethics, attempting to design an AI to produce ethical outcomes appears inadequate for the task of developing and deploying ethical AI. Writing thousands of years before the invention of the modern computer, Aristotle (ca. 350 B.C.E./2002) discerned that ethical excellence is not something achieved in a single act, but rather an accomplishment judged retrospectively over time (p. 12, 1098a). Hence, for Aristotle, the point of ethics cannot be merely to prescribe what to do, but to clarify what it means to be excellent.

What does it mean to be excellent with AI? In the case of military AI, the notion of responsibility can help answer this question in two ways. First, whatever excellence entails is largely relative to particular practices: flourishing for the Buddhist monk does not take the same form as it does for the Air Force fighter pilot. As Huntington observed, the military professional assumes objective responsibility for acting in the public interest as an essential demand of his or her practice. For today’s officer, to be excellent with AI entails accepting responsibility for the public interest…with AI. Officers must be able to use AI while still exercising responsibility for their role as public servants, a task that requires not only technical but also moral virtues.

A second, subjective sense of responsibility illuminates another aspect of ethical AI excellence. One established model of moral psychology suggests ethical behavior requires a combination of moral sensitivity, judgment, motivation, and character (Rest, 1994). Rule-based or outcome-based frameworks might give someone a means of deducing a morally preferable judgment, but such theories cannot make someone sensitive to all the morally salient features of a situation or instill the motivation and character to actually follow through on deliberation about those features. The other three components of moral psychology—sensitivity, motivation, and character—depend in part on whether an agent feels responsible for acting in an ethical way. Thus, ethical excellence with AI also requires officers to feel responsible for their behavior with AI in the first place.

RAI Training as Character Development

The concept of responsibility is critical to ethical AI in the DoD because ethical AI is ultimately dependent on human character. Achieving RAI requires developing officers competent to accept objective responsibility for using AI in the public interest and capable of feeling subjectively responsible for the ways their teams use AI. What exactly, then, constitutes “curricula that will enable RAI implementation”? Genuine training for RAI must be approached as character development, and the methodologies of character development should be considered the vehicle for any effective RAI curriculum.

Just as the military aims to develop technical competence through disciplined, repetitive training, education, and exercises, it has traditionally relied on the same approach to instill virtues of character. From uniform wear and customs and courtesies to leadership reaction courses and drill and ceremonies, the basic building blocks of military training have long served to form habits of discipline, decisiveness, courage, and respect. Given the military’s long-standing focus on character, Secretary Austin’s remark that RAI is the “only kind of AI we do” may seem less a charge than a foregone conclusion. If the DoD already does character development, why should it be concerned that its officers might develop and deploy AI irresponsibly?

In his 2022 book, Is Remote Warfare Moral?, Joseph Chapa explores how technological evolution in warfare changes the ethical demands on military character. We expect courage from our servicemembers, for instance, but technology can change the context in which courage is exercised, and thus how that virtue might manifest in an infantry officer versus a remotely piloted aircraft operator. As Dutch philosopher Peter-Paul Verbeek (2011) has observed, technology plays a fundamental role in framing human choices, mediating our experience of the world, and providing the means through which we act. If having an excellent character means being able to navigate our technologically mediated world ethically and effectively, then excellent character education must involve habituating servicemembers to the responsible use of the technologies they depend on.

The 2020 DoD AI Education Strategy focuses RAI training for the majority of its members on “understanding the ethical issues related to AI and adhering to all relevant regulations” (p. 46). To promote the kind of understanding and compliance that is conducive to genuine responsibility, academic knowledge of relevant issues and regulations must be supplemented by a more holistic approach to character development for RAI. To be effective, such an approach should involve the following:

Since responsible action requires both moral and technical competence, current efforts to improve AI literacy across the DoD are a vital part of character development for RAI. Education across the force on AI concepts, applications, and risks gives practitioners a framework for understanding their relationships with AI technologies and a vocabulary for discussing the appropriate use of such tools. Indeed, critical dialogue should be a goal throughout initial AI literacy education. Research on classroom discourse by Soter et al. (2008) suggests that “the most productive discussions (whether peer or teacher-led) are structured, focused, occur when students hold the floor for extended periods of time, [and] when students are prompted to discuss texts through open-ended or authentic questions…” (p. 373). Critical, open-ended dialogue promotes RAI insofar as it personalizes AI concepts—including ethical risks—and challenges individual preconceptions about AI, promoting subjective responsibility. By contrast, traditional “click-through” computer-based trainings are inadequate for RAI education.

While discourse about risks and responsibilities is necessary, it is not sufficient for the development of a character capable of RAI. The knowledge servicemembers gain through RAI literacy efforts is only valuable if it translates into responsible action. If the ability to consistently do anything well depends on practice, then RAI requires practice using AI responsibly. What might RAI practice entail? Perhaps the most obvious starting point is to put AI tools in the hands of servicemembers. The kinds of AI tools and the kinds of contexts in which they are applied will naturally vary across services and functional areas, but Don Howard (2014) articulates a general Aristotelian approach: “Training should begin with drill, grow with practice, and be nurtured by example” (p. 163). This training could take many forms. One option might be simply talking through a hypothetical scenario—like the one discussed at the UTEP workshop—that presents trainees with decisions relating to applications or outcomes of AI tools. More advanced scenarios could be integrated into large-scale exercises, where AI failures might present tactical and operational decision-makers with morally weighty problems nested in complex warfighting networks. In any case, practitioners must be able to gain cumulative, first-hand experience aligning their AI-assisted decisions with their operational and ethical priorities.

Since character development occurs gradually and within different contexts, conceptual discussion and practice navigating complex situations with AI should be supplemented by regular “moral reminders” about the obligations associated with RAI. The military is already familiar with moral reminders in various forms: oaths of office and enlistment, core values statements and service songs, and periodic refresher training on topics ranging from resource stewardship to suicide intervention. Research conducted for the Oxford Character Project suggests that periodic and explicit exposure to the moral principles underlying our routine behavior can help align actions with values (Lamb et al., 2021). Incorporating the DoD’s five ethical principles for AI into the battle rhythm of AI-assisted operators—whether through prompting by a system itself or through external continuation training—is a vital aspect of sustaining character for RAI: recurring engagement with underlying principles keeps them salient and relevant to decision-makers as AI capabilities and applications rapidly evolve.
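As a thought experiment, the minimal sketch below imagines one way a decision-support tool might deliver such system-level prompting. It is purely illustrative: the function and variable names are hypothetical, the gating logic is not drawn from any fielded system, and the principle summaries are loose paraphrases of the five principles named in the JAIC (2020) document.

```python
import random

# The DoD's five ethical principles for AI (JAIC, 2020), loosely paraphrased.
DOD_AI_PRINCIPLES = {
    "Responsible": "Exercise appropriate judgment and care; you remain responsible for this system's use.",
    "Equitable": "Take deliberate steps to minimize unintended bias in AI capabilities.",
    "Traceable": "Understand the technology, its development, and its operational methods.",
    "Reliable": "This capability has explicit, well-defined uses; stay within them.",
    "Governable": "Be ready to detect unintended consequences and to disengage the system.",
}

def confirm_with_reminder(recommendation: str) -> bool:
    """Hypothetical confirmation gate: before acting on a model's
    recommendation, display one ethical principle and require explicit
    operator acknowledgment."""
    name, text = random.choice(list(DOD_AI_PRINCIPLES.items()))
    print(f"Model recommends: {recommendation}")
    print(f"Reminder ({name}): {text}")
    answer = input("Acknowledge and proceed? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    if confirm_with_reminder("Flag track 47 as hostile"):
        print("Recommendation accepted by operator.")
    else:
        print("Recommendation held for further review.")
```

A real implementation would need to guard against reminder fatigue and fit the operational tempo of the system in question, but even a toy gate like this shows how a principle can be made salient at the moment of decision rather than relegated to annual training.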

Conclusion

As the military’s service academies, training commands, and professional military education schools seek to educate an AI-enabled force, these institutions must come to terms with what AI means for developing responsible military professionals. While norms around responsibility and AI will certainly evolve, there remains long-standing legal, ethical, and practical precedent to think of commissioned officers as the loci of responsibility for the application of military AI. Given the DoD’s commitment to RAI, it should devote significant pedagogical attention to the subjective skills, motivations, and perceptions of operators and leaders who depend on AI to execute their mission, beyond merely promoting technical literacy. Ultimately, the significance of character for RAI entails the application of proven character development methodologies from pre-commissioning education onward: critical dialogue, hands-on practice applying AI in complex circumstances, and moral reminders about the continuing and evolving relevance of the DoD’s ethical principles for AI.

References

Aristotle. (2002). Nicomachean ethics (J. Sachs, Trans.). Focus Publishing. (Original work published ca. 350 B.C.E.).
Austin, L. J. (2021, July 13). Secretary of Defense Austin Remarks at the Global Emerging Technology Summit of The National Security Commission on Artificial Intelligence (As Delivered). Washington, DC. https://www.defense.gov/News/Transcripts/Transcript/Article/2692943/secretary-of-defense-austin-remarks-at-the-global-emerging-technology-summit-of/.
Barnard, C. I. (1971). The functions of the executive (Thirtieth Anniversary Edition). Harvard University Press.
Chapa, J. O. (2022). Is remote warfare moral? Weighing issues of life and death from 7,000 miles. PublicAffairs.
Chief Digital and Artificial Intelligence Office. (2023, March 28). Talent and workforce: FY 2023 training catalog. https://www.ai.mil/docs/2023_CDAO_Course_Catalog_032823.pdf
Del Aguila, C. (2022, April 28). AI accelerator focuses on education. DAF-MIT AI Accelerator. https://www.aiaccelerator.af.mil/News/News-Article-View/Article/3179174/ai-accelerator-focuses-on-education/
Department of Defense. (2018). Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to advance our security and prosperity. https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF
Department of Defense. (2020). Artificial intelligence education strategy. https://www.ai.mil/docs/2020_DoD_AI_Training_and_Education_Strategy_and_Infographic_10_27_20.pdf
Department of Defense. (2022). Responsible artificial intelligence strategy and implementation pathway. https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF
Department of Defense. (2023, January 25). DoD Directive 3000.09: Autonomy in weapon systems. https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf
Howard, D. (2014). Virtue in cyber conflict. In L. Floridi & M. Taddeo (Eds.), The ethics of information warfare (pp. 155–168). Springer.
Huntington, S. P. (1985). The soldier and the state: The theory and politics of civil-military relations. The Belknap Press of Harvard University Press.
Joint Artificial Intelligence Center. (2020). Ethical principles for artificial intelligence. https://www.ai.mil/docs/Ethical_Principles_for_Artificial_Intelligence.pdf
Kearns, M. & Roth, A. (2020, January 13). Ethical algorithm design should guide technology regulation. The Brookings Institution. https://www.brookings.edu/research/ethical-algorithm-design-should-guide-technology-regulation/
Kobren, B. (2022, February 14). Hot topics (Part 4): Data analytics & artificial intelligence. Defense Acquisition University. https://www.dau.edu/training/career-development/logistics/blog/Hot-Topics-Part-4-Data-Analytics-and-Artificial-Intelligence
Lamb, M., Brant, J., & Brooks, E. (2021). How is virtue cultivated? Seven strategies for postgraduate character development. Journal of Character Education, 17(1), 81–108. https://www.infoagepub.com/products/journal-of-character-education-vol-17-1
Legal Information Institute. (2022, August). Command responsibility. In Wex. Cornell Law School. https://www.law.cornell.edu/wex/command_responsibility
Medecins Sans Frontieres. (n.d.). Combatants. The practical guide to humanitarian law. https://guide-humanitarian-law.org/content/article/3/combatants/
Office of Legal Counsel. (2007, April 16). Officers of the United States within the meaning of the appointments clause. In Opinions of the Office of Legal Counsel (Vol. 31). Department of Justice. https://www.justice.gov/file/451191/download
Pflanzer, M., Traylor, Z., Lyons, J. B., Dubljević, V., & Nam, C. S. (2022). Ethics in human-AI teaming: Principles and perspectives. AI and Ethics, 3, 917–935. https://doi.org/10.1007/s43681-022-00214-z
Rest, J. R. (1994). Background: Theory and research. In J. R. Rest & D. Narvaez (Eds.), Moral development in the professions: Psychology and applied ethics (pp. 12–34). Psychology Press.
Soter, A. O., Wilkinson, I. A., Murphy, P. K., Rudge, L., Reninger, K., & Edwards, M. (2008). What the discourse tells us: Talk and indicators of high-level comprehension. International Journal of Educational Research, 47(6), 372–391. https://doi.org/10.1016/j.ijer.2009.01.001
Verbeek, P. P. (2011). Moralizing technology: Understanding and designing the morality of things. University of Chicago Press.
Willink, J. (2017, February 2). Extreme ownership. TEDxUniversityofNevada, Reno, NV. https://www.youtube.com/watch?v=ljqra3BcqWM