{"id":1165,"date":"2026-03-25T09:00:43","date_gmt":"2026-03-25T13:00:43","guid":{"rendered":"https:\/\/tickle.utk.edu\/ise\/?p=1165"},"modified":"2026-03-24T09:03:57","modified_gmt":"2026-03-24T13:03:57","slug":"schelble-studies-ai-human-mixed-teams","status":"publish","type":"post","link":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/","title":{"rendered":"Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams"},"content":{"rendered":"<p><span data-contrast=\"auto\">Right now, large language model artificial intelligences (AIs) like ChatGPT are tools. Like a calculator or a hammer, you use them to complete one task, then put them away until later.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">However, AIs are gaining increased independence when contributing to collaborative work that human teammates rely on and work from. These AI teammates can quickly ingest and process large amounts of data, leaving humans to focus on skills AI can\u2019t match\u2014like interpreting ambiguous outputs and applying prior experience to new contexts.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">\u201cAI presents a great opportunity for teaming because its inherent computational strengths and weaknesses often complement our own,\u201d said Assistant Professor <\/span><a href=\"https:\/\/tickle.utk.edu\/ise\/faculty\/beau-schelble\/\"><span data-contrast=\"none\">Beau Schelble<\/span><\/a><span data-contrast=\"auto\">, who leads the <\/span><a href=\"https:\/\/bschelb.github.io\/ARCSLab\/\"><span data-contrast=\"none\">AI and Robotics for Collaborative Systems (ARCS) Lab<\/span><\/a><span data-contrast=\"auto\"> in the Department of Industrial and Systems Engineering. 
\u201cAn effective human-AI team should achieve outcomes that either exceed what either could accomplish alone or enable what neither could accomplish independently.\u201d<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1500\" height=\"1000\" class=\"alignnone size-full wp-image-1167\" src=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0360.jpg\" alt=\"\" srcset=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0360.jpg 1500w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0360-300x200.jpg 300w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0360-1024x683.jpg 1024w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0360-768x512.jpg 768w\" sizes=\"auto, (max-width: 1500px) 100vw, 1500px\" \/><\/p>\n<p><span data-contrast=\"auto\">In September of 2025, Schelble was awarded a cooperative agreement from the United States Army Research Office (ARO) to study how to prevent attacks that undermine team performance in human-AI teams by targeting task accuracy, coordination, trust, and situation awareness\u2014and how to respond after such attacks. Schelble is serving as the principal investigator for the cooperative agreement, alongside co-investigators MAJ Allyson Hauptman of the United States Military Academy\u2019s Army Cyber Institute and Professor Lionel Robert of the University of Michigan, Ann Arbor.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Schelble has been studying the human-AI teaming space for nearly a decade. 
He sees the potential for human-AI teams (HATs) to revolutionize decision-making in manufacturing, nuclear energy, disaster recovery, healthcare, and more\u2026as long as the AIs can be trusted to work in the team\u2019s favor.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">\u201cWe still lack awareness of exactly how AI systems work to reach a specific outcome,\u201d said Sarah Mendoza, a PhD student in Schelble\u2019s lab. \u201cYou can ask a human questions to understand their decision-making process, but we can\u2019t do the same for many forms of AI.\u201d<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1500\" height=\"1000\" class=\"alignnone size-full wp-image-1171\" src=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0184.jpg\" alt=\"\" srcset=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0184.jpg 1500w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0184-300x200.jpg 300w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0184-1024x683.jpg 1024w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0184-768x512.jpg 768w\" sizes=\"auto, (max-width: 1500px) 100vw, 1500px\" \/><\/p>\n<p><span data-contrast=\"auto\">In addition to existing cybersecurity risks, compromised AI teammates could interfere with team dynamics by spreading misleading information, misassigning responsibility, or creating confusion that undermines coordination.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span 
data-contrast=\"auto\">\u201cHuman-AI teams are going to be a common component of the working environment across several industries very soon,\u201d said Yayun Tian, who worked as a software engineer in the healthcare industry for three years before pursuing her PhD in Schelble\u2019s lab. \u201cWhen AI teammates are attacked, it is critical to understand how to support HATs\u2019 ability to identify and mitigate the attack before significant harm occurs.\u201d<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<h2><b><span data-contrast=\"auto\">All Teammates Have Weaknesses<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/h2>\n<p><span data-contrast=\"auto\">Right now, if an AI tool outputs false or misleading information (commonly known as \u201challucinations\u201d), users typically do not view it as nefarious. They would interpret it as a competency issue: a mismatch between the model\u2019s capabilities and what is being asked of it. They may reduce their use of the tool, but they will not dig deeper to find signs of intelligent, malicious intent.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">That\u2019s a notable part of the problem, since knowing when an adversarial attack began is critical to repairing the resulting damage.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">\u201cIf a compromised AI teammate gives one person inaccurate information, that teammate will be less effective because they are working from an inaccurate model of their environment and task,\u201d Schelble said. 
\u201cA compromised AI teammate could even get human teammates to argue with one another by feeding them conflicting information.\u201d<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">In such cases, getting the team back on track will take more than simply repairing the AI teammate programmatically or retraining the model entirely.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1500\" height=\"1000\" class=\"alignnone size-full wp-image-1169\" src=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0297.jpg\" alt=\"\" srcset=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0297.jpg 1500w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0297-300x200.jpg 300w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0297-1024x683.jpg 1024w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0297-768x512.jpg 768w\" sizes=\"auto, (max-width: 1500px) 100vw, 1500px\" \/><\/p>\n<p><span data-contrast=\"auto\">Just like humans, current AI tools have trust repair strategies\u2014tactics meant to soothe and reassure teammates after an error. 
When users tell an AI tool that its previous output was false or misleading, the tool outputs an (often convincing) apology.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Given the longer time frame and the greater emotional impact of the \u2018betrayal,\u2019 humans will likely be much less willing to accept such apologies from previously compromised AI teammates.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">\u201cThat AI teammate is there for a reason; the team\u2019s performance is better when it\u2019s there,\u201d Schelble said. \u201cSo, how do you get the team working together effectively again? Our team\u2019s research focuses on understanding how human-AI teams can recognize and respond when an AI may be compromised, and how to reintegrate the AI once its security vulnerability has been resolved.\u201d<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<h2><b><span data-contrast=\"auto\">Understanding Compromised AI Teammates<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/h2>\n<p><span data-contrast=\"auto\">The first part of the three-year ARO cooperative agreement involves interviewing public safety and cybersecurity experts who frequently work with AI systems of some form, including those from organizations such as the University of Tennessee and the City of Knoxville. 
These experts will help the team identify factors that influence the level of risk posed by a compromised AI teammate and how different attacks may manifest in a team setting.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">They will also provide insights into how human teammates may react to compromised AIs\u2014sometimes unintentionally.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1500\" height=\"1000\" class=\"alignnone size-full wp-image-1170\" src=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0308.jpg\" alt=\"\" srcset=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0308.jpg 1500w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0308-300x200.jpg 300w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0308-1024x683.jpg 1024w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0308-768x512.jpg 768w\" sizes=\"auto, (max-width: 1500px) 100vw, 1500px\" \/><\/p>\n<p><span data-contrast=\"auto\">\u201cMany of the experts I\u2019ve spoken to think of AI as a really helpful system that has the potential to reduce their workload,\u201d said Mendoza. \u201cBut some interviewees are very, very hesitant to integrate AI into their work. 
So even within a team, you may have different people assuming a competency issue versus the idea that the AI is betraying them.\u201d<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">For the next phase, the cooperative agreement team has programmed a commercially available video game with a team task. Human participants will come to the lab and work together with one of two LLM teammates to accomplish the task. One means well, while the other will try to get in the way.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The researchers will evaluate session data to characterize the factors that contribute to human players\u2019 ability to sense when their code-based teammate is working against them. Schelble and the team hope to identify, define, and leverage novel aspects of situation awareness, information-sharing, and shared knowledge present for human-AI teams\u2014at both the team and individual level\u2014that can be augmented to help people identify a compromised AI teammate, then quickly recover from and reverse any actions it has taken.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1500\" height=\"1000\" class=\"alignnone size-full wp-image-1172\" src=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0230.jpg\" alt=\"\" srcset=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0230.jpg 1500w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0230-300x200.jpg 300w, 
https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0230-1024x683.jpg 1024w, https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0230-768x512.jpg 768w\" sizes=\"auto, (max-width: 1500px) 100vw, 1500px\" \/><\/p>\n<p><span data-contrast=\"auto\">By the end of the three-year cooperative agreement, the researchers hope to have established the field\u2019s first fundamental understanding and guidelines for effectively identifying, preventing, and recovering from compromised AI teammates.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Long-term human-AI teaming success will ultimately rely on human teammates\u2019 ability to recognize when something is off, but Schelble and the team hope this research will point toward a viable system and methodology grounded in fundamental science that teams can use to identify problems.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">\u201cLike the master caution light on an aircraft flight deck lets you know that there\u2019s a problem, we want to give people the tools they need to augment their team\u2019s ability to recognize and prevent or recover from an attack,\u201d Schelble said. \u201cThat way, they can see through a compromised AI and bring the team back to the ground truth as quickly and efficiently as possible.\u201d<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/p>\n<h2 class=\"orange-mark\">Contact<\/h2>\n<p>Izzie Gall (<a href=\"mailto:egall4@utk.edu\">egall4@utk.edu<\/a>)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Right now, large language model artificial intelligences (AIs) like ChatGPT are tools. 
Like a calculator or a hammer, you use [&hellip;]<\/p>\n","protected":false},"author":28,"featured_media":1166,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[19,3,4],"tags":[310,309,308],"class_list":["post-1165","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-department-news","category-full-time-adjunct-faculty","category-research","tag-ai","tag-arcs-lab","tag-beau-shelble"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams - Industrial and Systems Engineering<\/title>\n<meta name=\"description\" content=\"Beau Schelble\u2019s team studies how humans detect compromised AI teammates and how their impact can be systematically mitigated.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams - Industrial and Systems Engineering\" \/>\n<meta property=\"og:description\" content=\"Beau Schelble\u2019s team studies how humans detect compromised AI teammates and how their impact can be systematically mitigated.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/\" \/>\n<meta property=\"og:site_name\" content=\"Industrial and Systems Engineering\" \/>\n<meta property=\"article:published_time\" content=\"2026-03-25T13:00:43+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0020.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1500\" \/>\n\t<meta property=\"og:image:height\" content=\"1000\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Ashlyn Kittrell\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Ashlyn Kittrell\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/\"},\"author\":{\"name\":\"Ashlyn Kittrell\",\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/#\\\/schema\\\/person\\\/3b0fd11d83b771eba159678fc6920b12\"},\"headline\":\"Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams\",\"datePublished\":\"2026-03-25T13:00:43+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/\"},\"wordCount\":1160,\"image\":{\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2026\\\/03\\\/20260226_Engineering-0020.jpg\",\"keywords\":[\"AI\",\"ARCS Lab\",\"Beau Shelble\"],\"articleSection\":[\"Department News\",\"Full-time\\\/Adjunct 
Faculty\",\"Research\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/\",\"url\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/\",\"name\":\"Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams - Industrial and Systems Engineering\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2026\\\/03\\\/20260226_Engineering-0020.jpg\",\"datePublished\":\"2026-03-25T13:00:43+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/#\\\/schema\\\/person\\\/3b0fd11d83b771eba159678fc6920b12\"},\"description\":\"Beau Schelble\u2019s team studies how humans detect compromised AI teammates and how their impact can be systematically 
mitigated.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/#primaryimage\",\"url\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2026\\\/03\\\/20260226_Engineering-0020.jpg\",\"contentUrl\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2026\\\/03\\\/20260226_Engineering-0020.jpg\",\"width\":1500,\"height\":1000},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/schelble-studies-ai-human-mixed-teams\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/#website\",\"url\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/\",\"name\":\"Industrial and Systems Engineering\",\"description\":\"University of Tennessee, Knoxville\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/#\\\/schema\\\/person\\\/3b0fd11d83b771eba159678fc6920b12\",\"name\":\"Ashlyn Kittrell\",\"url\":\"https:\\\/\\\/tickle.utk.edu\\\/ise\\\/author\\\/akittre3\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams - Industrial and Systems Engineering","description":"Beau Schelble\u2019s team studies how humans detect compromised AI teammates and how their impact can be systematically mitigated.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/","og_locale":"en_US","og_type":"article","og_title":"Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams - Industrial and Systems Engineering","og_description":"Beau Schelble\u2019s team studies how humans detect compromised AI teammates and how their impact can be systematically mitigated.","og_url":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/","og_site_name":"Industrial and Systems Engineering","article_published_time":"2026-03-25T13:00:43+00:00","og_image":[{"width":1500,"height":1000,"url":"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0020.jpg","type":"image\/jpeg"}],"author":"Ashlyn Kittrell","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Ashlyn Kittrell","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/#article","isPartOf":{"@id":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/"},"author":{"name":"Ashlyn Kittrell","@id":"https:\/\/tickle.utk.edu\/ise\/#\/schema\/person\/3b0fd11d83b771eba159678fc6920b12"},"headline":"Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams","datePublished":"2026-03-25T13:00:43+00:00","mainEntityOfPage":{"@id":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/"},"wordCount":1160,"image":{"@id":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/#primaryimage"},"thumbnailUrl":"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0020.jpg","keywords":["AI","ARCS Lab","Beau Shelble"],"articleSection":["Department News","Full-time\/Adjunct Faculty","Research"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/","url":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/","name":"Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams - Industrial and Systems Engineering","isPartOf":{"@id":"https:\/\/tickle.utk.edu\/ise\/#website"},"primaryImageOfPage":{"@id":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/#primaryimage"},"image":{"@id":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/#primaryimage"},"thumbnailUrl":"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0020.jpg","datePublished":"2026-03-25T13:00:43+00:00","author":{"@id":"https:\/\/tickle.utk.edu\/ise\/#\/schema\/person\/3b0fd11d83b771eba159678fc6920b12"},"description":"Beau Schelble\u2019s team studies how humans detect compromised AI teammates and how their impact can be systematically 
mitigated.","breadcrumb":{"@id":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/#primaryimage","url":"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0020.jpg","contentUrl":"https:\/\/tickle.utk.edu\/ise\/wp-content\/uploads\/sites\/4\/2026\/03\/20260226_Engineering-0020.jpg","width":1500,"height":1000},{"@type":"BreadcrumbList","@id":"https:\/\/tickle.utk.edu\/ise\/schelble-studies-ai-human-mixed-teams\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/tickle.utk.edu\/ise\/"},{"@type":"ListItem","position":2,"name":"Schelble\u2019s ARO Agreement to Help Create More Secure Human-AI Teams"}]},{"@type":"WebSite","@id":"https:\/\/tickle.utk.edu\/ise\/#website","url":"https:\/\/tickle.utk.edu\/ise\/","name":"Industrial and Systems Engineering","description":"University of Tennessee, Knoxville","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/tickle.utk.edu\/ise\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/tickle.utk.edu\/ise\/#\/schema\/person\/3b0fd11d83b771eba159678fc6920b12","name":"Ashlyn 
Kittrell","url":"https:\/\/tickle.utk.edu\/ise\/author\/akittre3\/"}]}},"_links":{"self":[{"href":"https:\/\/tickle.utk.edu\/ise\/wp-json\/wp\/v2\/posts\/1165","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tickle.utk.edu\/ise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tickle.utk.edu\/ise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tickle.utk.edu\/ise\/wp-json\/wp\/v2\/users\/28"}],"replies":[{"embeddable":true,"href":"https:\/\/tickle.utk.edu\/ise\/wp-json\/wp\/v2\/comments?post=1165"}],"version-history":[{"count":0,"href":"https:\/\/tickle.utk.edu\/ise\/wp-json\/wp\/v2\/posts\/1165\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/tickle.utk.edu\/ise\/wp-json\/wp\/v2\/media\/1166"}],"wp:attachment":[{"href":"https:\/\/tickle.utk.edu\/ise\/wp-json\/wp\/v2\/media?parent=1165"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tickle.utk.edu\/ise\/wp-json\/wp\/v2\/categories?post=1165"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tickle.utk.edu\/ise\/wp-json\/wp\/v2\/tags?post=1165"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}