Michael Kouremetis, Dean Lawrence, Ron Alford, Zoe Cheuvront, David Dávila, Benjamin Geyer, Trevor Haigh, Ethan Michalak, Rachel Murphy, Gianpaolo Russo
Abstract As the capabilities of cyber adversaries continue to evolve, in parallel with the explosion of maturing and publicly available artificial intelligence (AI) technologies, cyber defenders may reasonably wonder when cyber adversaries will begin to field these AI technologies as well. In this regard, some promising (read: scary) areas of AI for cyber attack capabilities are search, automated planning, and reinforcement learning. One possible defensive mechanism against future AI-enabled adversaries is cyber deception. To that end, in this work we present and evaluate Mirage, an experimentation system, demonstrated in both emulation and simulation forms, that allows for the implementation and testing of novel cyber deceptions designed to counter cyber adversaries that use AI search and planning capabilities.