Autonomous systems are becoming a key component of cybersecurity, sparking vital conversations about the relationship between human security teams and advanced technology. What level of trust should be granted to an AI system taking autonomous action to stop cyberattacks? At what point do security teams intervene in its decision-making?
Thousands of organizations now use fully autonomous artificial intelligence in cybersecurity, yet the question of how human beings effectively manage that technology remains critical. Why? Well, have you seen the movie The Terminator? Okay then.
The Move to Autonomous Systems
As threat actors innovate, the cybersecurity industry has broadly kept taking the same approach: security teams scramble to craft rules and policies that attempt to predict attackers' future techniques, usually based on what those attackers have done in the past. When an attack is detected, either an automated system issues a pre-programmed action or a human operator runs a series of planned playbooks to undo the attack step by step. Both responses typically take too long, prove inadequate and miss parts of the attacker's movements. Blanket response mechanisms fail to contain real-world attacks, which are constantly tweaked and improved by determined, creative cybercriminals.
Due to the complexity of modern digital infrastructure, thousands of micro-decisions now need to be made daily to match an attacker's spontaneous and erratic behavior. Business leaders are recognizing that cybercrime tactics far exceed what even large teams of human operators can defend against. To stand a fighting chance of avoiding cyber disruption, the conversation is shifting beyond simple automation and toward autonomous systems that can independently assess a cyberattack and calculate the best possible action to take in any new threat scenario.
Decision-Making on a Whole New Level
With artificial intelligence in cybersecurity, human operators are raising their decision-making to another level. Instead of struggling to make an increasingly unmanageable number of micro-decisions themselves, humans now preside over the logic, rules and constraints that AI systems must adhere to when making millions of granular micro-decisions at scale. By establishing the constraints and the zones in which the algorithms may operate independently, organizations can become comfortable letting the system run on its own within those parameters.
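As a rough illustration of what those parameters might look like in practice, here is a minimal sketch assuming a hypothetical policy object that an autonomous response engine consults before acting on its own. The class, field and action names are invented for this example and do not reflect any specific vendor's product.

```python
from dataclasses import dataclass, field

@dataclass
class AutonomousResponsePolicy:
    """Hypothetical guardrails a human operator sets once; the AI engine then
    makes granular response decisions only within these bounds."""
    allowed_actions: set[str] = field(
        default_factory=lambda: {"block_connection", "quarantine_device"})
    autonomous_zones: set[str] = field(
        default_factory=lambda: {"guest_wifi", "dev_network"})   # where the engine may act alone
    excluded_assets: set[str] = field(
        default_factory=lambda: {"domain_controller", "erp_server"})  # always escalate to a human
    min_confidence: float = 0.9                                  # act autonomously only above this score

    def permits(self, action: str, asset: str, zone: str, confidence: float) -> bool:
        """Return True if the engine may take this action without human sign-off."""
        return (
            action in self.allowed_actions
            and zone in self.autonomous_zones
            and asset not in self.excluded_assets
            and confidence >= self.min_confidence
        )

# Example: quarantining a laptop on the guest network is allowed,
# but anything touching a domain controller is escalated to a human.
policy = AutonomousResponsePolicy()
print(policy.permits("quarantine_device", "laptop-042", "guest_wifi", 0.95))   # True
print(policy.permits("quarantine_device", "domain_controller", "core", 0.95))  # False
```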
Human operators are no longer setting the rules and policies for specific cyber threats, but are now plotting strategy and business priorities, while setting guidelines for the AI system to act within. But the question still remains: Is it safe?
Will Adopting AI Put My Business at Risk?
Unlike the autonomous cyborgs armed with militarized weaponry and intent on destruction in the Terminator movie franchise, these autonomous systems give human operators varying degrees of control and oversight. While the journey toward autonomous security will differ by company size and industry, organizations have several ways to adopt autonomous systems while leaving control in the hands of humans. Below are four models of human-AI oversight that let organizations choose the level of control they find safest; a brief code sketch after the list shows how the four modes differ in practice.
4 Models of Human-AI Oversight
1. Human in the loop (HITL)
In this model, the human does the decision-making. The machine only provides recommendations, along with the context and supporting evidence behind them, to reduce time-to-meaning and time-to-action for the human operator. Any action taken is the decision of the human.
2. Human in the loop for exceptions (HITLFE)
Most decisions are made autonomously in this model and the human only handles exceptions. The system requests some judgment or input from the human before it can make the decision regarding any exception. Humans completely control the logic to determine which exceptions are flagged for review.
3. Human on the loop (HOTL)
In this model, the machine makes the micro-decisions and takes all actions. The human operator can review the outcomes of those actions to understand the source of the anomalous, yet contained, behavior. The AI engine is left to make decisions and carry out actions, but the human can review any decision it has made at any point.
4. Human out of the loop (HOOTL)
In this model, the machine makes every decision, and improvement happens in an automated closed loop. The result is a self-healing, self-improving feedback loop in which each component of the AI feeds into and improves the next, continuously raising the achievable security state. This model takes humans completely out of the loop and lets the machine do all of the learning and decision-making.
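To make the differences between these four models concrete, here is a minimal, purely illustrative sketch of how an orchestration layer might route a machine-generated decision under each oversight mode. The mode names follow the list above; everything else (function names, the decision dictionary, the exception logic) is hypothetical.

```python
from enum import Enum, auto

class OversightMode(Enum):
    HITL = auto()     # human in the loop: machine only recommends
    HITLFE = auto()   # human in the loop for exceptions
    HOTL = auto()     # human on the loop: machine acts, human reviews after
    HOOTL = auto()    # human out of the loop: fully autonomous, closed loop

def route_decision(mode: OversightMode, decision: dict, is_exception) -> str:
    """Decide who acts on a machine-generated decision under each oversight model.
    `decision` is a hypothetical dict such as {"action": ..., "confidence": ...};
    `is_exception` is operator-defined logic flagging cases that need human judgment."""
    if mode is OversightMode.HITL:
        return "recommend_to_human"                 # human makes every call
    if mode is OversightMode.HITLFE:
        return "escalate_to_human" if is_exception(decision) else "execute"
    if mode is OversightMode.HOTL:
        return "execute_and_log_for_review"         # human audits outcomes afterwards
    return "execute_and_feed_back"                  # HOOTL: act, then learn from the result

# Example: operators define the exception logic once (here, low-confidence decisions),
# and the same detection is handled differently under each oversight model.
low_confidence = lambda d: d["confidence"] < 0.8
detection = {"action": "block_connection", "confidence": 0.72}
for mode in OversightMode:
    print(mode.name, "->", route_decision(mode, detection, low_confidence))
```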
All four models have their own use cases, so no matter how mature a company's security posture may be, the security team can feel confident leveraging a system's recommendations. Those recommendations and decisions are based on micro-analysis at a scale far beyond what any individual or team could achieve in the hours available to them. Organizations of any type and size, with any use case or business need, can apply AI decision-making in a way that suits their safety needs, while autonomously detecting and responding to cyberattacks and preventing the disruption they cause.
Protecting Your Business Is a Top Priority
At ATC, we value the safety and security of your business. Today, every business is vulnerable to attack, not just major global brands. The consequences of being unprepared can be catastrophic. That’s why we partner with only the best in cybersecurity to mitigate threats and protect you from existential risks. Our long-time partnership with Darktrace, a global leader in cybersecurity AI, allows us to deliver complete AI-powered solutions to free your business (and the world) of cyber disruption.
We’ve got the expertise, solutions and vendors in place to help you build a defensible, AI-powered cybersecurity posture. Contact us to digitally transform your business today. Positive outcomes await.