News | May 30, 2022

In AI We Trust, All Others We Model

The NSCAI explained in its 2021 final report that AI is a unique human invention that is not a single event or technology. Rather, AI is like what Thomas Edison said of electricity: “It is a field of fields… it holds the secrets which will reorganize the life of the world.” Today, we experience AI daily. We interact directly with digital assistants like Alexa, Siri, and Watson, and the IRS uses AI to analyze our tax returns for fraud. However, today’s examples are minor advances, the tip of the iceberg, compared with the transformation that is coming.

This study determined that the greatest impact of AI will be on human decision making. AI can already surpass a human’s ability to process data by many thousands of times, and it does so at incredible speed. With continuous advances in processing power and software efficiency, this advantage of machine over human will only grow. Processing data is a tremendous strength of AI, but can AI create insights? More importantly, can AI be trusted to act? The answer to both questions must be a resounding ‘YES’, and we must find ways to make it so.

The US military often teaches the decision-making process as a continuous loop using the acronym OODA for Observe, Orient, Decide, and Act. This process needs to account for the introduction of AI by shifting human decisions earlier in time, creating AI agents that are tested and trusted, empowering AI to act side-by-side with humans, and maintaining overall human oversight and control. We propose this modified process be called GOOD-AI for Guide, Observe, Orient, Decide, Act and Interact. A principal agent such as a commander (or civilian leader) guides the process by determining what needs to be achieved (intent) and setting the parameters, ethics, thresholds (right and left limits), etc. Key to this step, and throughout the process, is the assessment of risk impact and probability to determine when and how agents (human or machine) may act. As human-machine teams interact to solve problems, results must be fed back to the commander so that revised guidance can be fed forward. The interaction ensures continuous improvement, oversight, and the ability to terminate a system if necessary.
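To make the guidance, risk thresholds, and feedback path concrete, here is a minimal sketch of the GOOD-AI loop in Python. All names (Guidance, risk_threshold, observe, orient, decide, act, interact) are hypothetical illustrations invented for this sketch, not part of the study or any published framework; the stubs stand in for real sensors, models, and command channels.

```python
# Minimal sketch of the GOOD-AI loop (Guide, Observe, Orient, Decide, Act, Interact).
# All names here are hypothetical illustrations, not a published API.
from dataclasses import dataclass
import random


@dataclass
class Guidance:
    """Commander's intent and limits set in the Guide step."""
    intent: str
    risk_threshold: float   # above this risk, a human must decide
    terminate: bool = False


def observe() -> dict:
    """Observe: gather raw data from sensors or reports (stubbed here)."""
    return {"signal": random.random()}


def orient(observation: dict) -> dict:
    """Orient: fuse data into an assessment, including an estimated risk."""
    return {"assessment": observation["signal"], "risk": observation["signal"]}


def decide(guidance: Guidance, picture: dict) -> str:
    """Decide: the AI agent may act only within the commander's limits;
    otherwise the decision is escalated to a human."""
    if picture["risk"] <= guidance.risk_threshold:
        return "ai_act"
    return "escalate_to_human"


def act(decision: str) -> str:
    """Act: execute the chosen course of action (stubbed)."""
    return f"executed:{decision}"


def interact(guidance: Guidance, outcome: str) -> Guidance:
    """Interact: feed results back so the commander can revise guidance,
    tighten limits, or terminate the system entirely."""
    print(f"feedback to commander -> {outcome}")
    # Revised guidance would be issued here; unchanged in this sketch.
    return guidance


def good_ai_loop(guidance: Guidance, iterations: int = 3) -> None:
    """Run the Guide-Observe-Orient-Decide-Act-Interact cycle."""
    for _ in range(iterations):
        if guidance.terminate:          # human oversight retains a kill switch
            break
        picture = orient(observe())
        outcome = act(decide(guidance, picture))
        guidance = interact(guidance, outcome)


if __name__ == "__main__":
    good_ai_loop(Guidance(intent="defend the network", risk_threshold=0.7))
```

The essential design point in this sketch is that the Guide step bounds what the AI agent may decide on its own, while the Interact step closes the loop so guidance can be revised or the system terminated.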

Greek mythology tells of Prometheus, the Titan god, bestowing fire on humans so that they could live more comfortably and prosper. The power of fire is neutral; its use continues to be positive or negative based on the desires and actions of people. Today, we have created a new power, Artificial Intelligence (AI), with god-like potential. Unlike fire, AI is not simply a gift, but a tool invented by human enterprise. It is up to us to determine how we will continue to forge AI and use it to improve our lives. This study explores the development and upholding of ethics, standards, and law to ensure that the impacts of AI remain consistent with US values. To make this a reality, the US must continue to lead AI international discourse, practice, and accountability.

This study identifies three key challenges. First, the change created by AI can outpace humans. Addressing this challenge requires that we lead in the emerging technologies that will produce even more powerful AI and uphold ethics, standards, and laws that are true to US values.

Second, in its final 2021 report to Congress, the NSCAI stated that a national AI strategy does not yet exist, that the organizational structure for collaboration is insufficient, and that inadequate resources are in place to win the global race and maintain the US’s position as the leader in AI technology.3 Recognizing these challenges, Congress took sweeping steps to deploy government agencies, enacting 20 separate provisions in the National Defense Authorization Act 2021.4 This study recommends that Congress’ actions be considered only the start of the concerted effort necessary to increase momentum and realize the full potential of the nation’s intellect, creativity, and determination.

Third, human capital is the limiting factor in retaining the US’s leadership position in AI and other critical technologies. Large investments in K-12 education by government and industry are needed to inspire generations of national security entrepreneurs and workers.

Unfortunately, the US’s foremost challenger in AI, China, sees AI as critical to achieving its goal of creating a superior “world-class military.”5 China’s objective is unmistakably to secure superpower status, as expressed in its ambition for the “Great Rejuvenation.”6 Fundamentally, China’s grand goal is not short-term advancement but long-term global control, and AI will be the key enabler because of its ability to boost all industries. Furthermore, China’s pursuit of its AI objectives falls outside the international norms and US values on which international security increasingly depends. Alone, the US may be unable to outcompete China in sheer numbers of investment, people, or systems. However, the US can, and must, marshal its partners and empower its people to innovate and create a freer and more prosperous world. The US must accelerate the implementation of ethical AI to secure the future: America’s and the world’s!

Recommendation Summary

Change created by AI can outpace humans

Leading Emerging Technologies

  • Problem – AI and quantum technologies will fundamentally change human decision making.
  • Solution – Create a commercialization strategy for quantum that paves the way for industry and the government to accelerate out of the lab; Adapt decision making processes to account for AI (GOOD-AI).

Upholding Ethics, Laws and Standards

  • Problem – Standards for ethical AI are not agreed upon internationally.
  • Solution – Reinforce ethical AI guidelines like those published by the DoD; Require measures of the trustworthiness of AI; Require ‘red team’ testing before any government or military AI is implemented; Lead global initiatives to generate international norms and eventually law addressing the abuse of military AI; Work towards an international body for AI cooperation and lawful use.

A national AI strategy does not exist

Turbo-charging American Innovation

  • Problem – A national AI strategy does not exist.
  • Solution – Establish a comprehensive national AI strategy to focus the US innovation system, with clear priorities, measurable timeframes and goals, a shared mission, and critical partnerships; Grant the NAIIO the necessary authorities to drive national priorities; Include state governments, industry, and international partners; Synchronize the strategy across all other solutions.

Accelerating AI Adoption with Partners

  • Problem – Existing funding opportunities fail to identify and transition the most disruptive ‘deep tech’ solutions.
  • Solution – Establish new Government Commercial Strategic Investments (GCSI) that encourage longer term startup investment in collaboration with Corporate Venture Capital; Establish more technical Partnership Intermediary Agreements with industry for innovation centers.

Human Capital is the Limiting Factor

Inspire a Generation of National Security Entrepreneurs and Workers

  • Problem: The United States is not educating or training enough human capital with the skills needed to meet the challenges of emerging national security technologies.
  • Solution: A STEM Human Capital Development Plan that inspires interest at the K-12 level and incentivizes higher learning leading toward the STEM careers and skill sets needed to sustain the US competitive advantage; Tuition assistance/forgiveness for critical areas such as computer science, machine learning, and quantum engineering.

Read the report →