
Behavioral Identity as the New Perimeter

John Frushour
Mar 22, 2023

Generally, we’d probably all agree that the front line of risk in large enterprises is composed of those warm carbon units that also represent our most important assets: our people. However, it wasn’t always this way. And as with anything, an analysis of our history can be used to forecast our future.

Cybersecurity has been narrowing the breadth of risks to our environment since its inception. Starting circa Y2K, the network perimeter was the primary battleground. Gone were the days of Cliff Stoll raking his keys across the exposed copper of a serial cable, or printing keystrokes as the first documented intrusion detection system (IDS). In their place came digital manifestations of the same tripwire alarms.

Information security practitioners were hyper-focused on detection and prevention at the trust boundaries of our enterprises. Firewalls, gateways, honeypots, and other perimeter traps gave way to innovations like security information and event management (SIEM). Still, they were not perfect. Attackers worked around them, giving rise to a new battleground: the endpoint.

Things such as "next-generation antivirus," policy-based firewalls, application inspection, and security orchestration became hallmarks of endpoint defense. As with any maturation, new lessons learned were applied to old technologies; endpoint protection posture, for example, became a back-fed data source for perimeter defense. This persists today as an active area of exploitation.

Most recently, information security has moved, much more quickly now, toward investigating, protecting, and defending individual human actions. This is the epicenter of risk management, where most new investment is concentrated. That's not to say the other historical areas of information security are forgotten or immaterial (they are very much alive), but identity has become a focused area of protection.

A Future Predicted by Lessons of the Past

Notably, years ago, as the battleground transitioned from perimeter to endpoint, a new technology called user behavior analytics (UBA) emerged. It promised a utopia of attack identification and, much like 3D movies or Google Glass, turned out to be largely impractical: too much hype with too little reward. It was also incredibly data heavy, requiring a new type of engineer, the data scientist, that the world might not have been ready for yet. UBA focused on attacker identification and threat modeling, an artifact of investment in IDS and intrusion prevention systems (IPS), where vulnerability exploitation was used to trace activity patterns back to an attacker. UBA flared, as did "IP acceleration," and was eventually relegated to the buzzword bit bucket.

Today our collective infosec future might be predicted by these lessons of the past. Data science has moved from the academic world to the practical one, resulting in massive changes to identity enrichment. No longer are identity management platforms just repositories for attribute information; they have matured into data lakes. More and more data sources, plus correlation, normalization, and lifecycle management of identity, have resulted in rich cloud and on-premises repositories. If data is the oxygen of artificial intelligence, then massive containers of identity information can certainly suffice as the noble gas required for UBA.

During that same evolutionary cycle, the kill chain and MITRE's ATT&CK mapping arose. It became less important to identify the threat actor and much more important to identify their behavior. Those same identity data lakes, though, don't include information about the attackers (wouldn't that be nice); instead they hold rich behavioral and statistical data about our own workforce.

So what, then, is the collective future of cybersecurity? Judging from history, and keeping an eye on using new technologies to mature old or forgotten functions, it would seem that this focus on identity management, used to profile and baseline the digital behavior of our people (vice the attackers), is that future. This approach represents a bit of modus tollens in practice: if a legitimate member of our workforce behaves consistently with their baseline, then behavior that deviates from that baseline suggests the actor is not who they claim to be, or is at least unwanted behavior worthy of alarm.
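As one way to picture that logic, here is a minimal sketch in Python of comparing a day's activity against a per-user behavioral baseline. The feature names (login_hour, hosts_touched, mb_downloaded) and the z-score threshold are illustrative assumptions for this example only, not a reference to any particular UBA product.

```python
# Minimal sketch: flag activity that deviates from a user's behavioral baseline.
# Feature names and thresholds are illustrative assumptions, not a product API.
from statistics import mean, stdev

def baseline(history: list[dict]) -> dict:
    """Compute per-feature mean and standard deviation from past activity."""
    features = history[0].keys()
    return {
        f: (mean(h[f] for h in history), stdev(h[f] for h in history))
        for f in features
    }

def is_anomalous(observation: dict, base: dict, z_threshold: float = 3.0) -> bool:
    """Modus tollens in practice: known-good behavior stays near the baseline,
    so an observation far from the baseline implies the actor may not be the
    legitimate user, or is at least worthy of alarm."""
    for feature, value in observation.items():
        mu, sigma = base[feature]
        if sigma == 0:
            continue  # no observed variance; skip rather than divide by zero
        if abs(value - mu) / sigma > z_threshold:
            return True
    return False

# Hypothetical daily activity for one employee.
history = [
    {"login_hour": 9, "hosts_touched": 3, "mb_downloaded": 120},
    {"login_hour": 8, "hosts_touched": 2, "mb_downloaded": 90},
    {"login_hour": 10, "hosts_touched": 4, "mb_downloaded": 150},
    {"login_hour": 9, "hosts_touched": 3, "mb_downloaded": 110},
]
today = {"login_hour": 3, "hosts_touched": 40, "mb_downloaded": 5000}
print(is_anomalous(today, baseline(history)))  # True: worthy of alarm
```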

Goal: Transparent Identity Proofing

One can easily imagine a world where the promises of UBA and the data-enriched reality of current identity management practices produce not just exacting specificity around anomalous activity, but also improvements in authentication, integrity, and other foundational principles. If behavioral analytics are applied to the corpus of data around identity attributes (member of, reports to, job code is, location was), say, then an adept information security team might be able to skip secondary challenges for privileged account management. In short: if all the people with this widget display this activity, then don't challenge a newcomer who exhibits the same.
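To illustrate that "widget" idea, the following is a hypothetical sketch of a peer-group check deciding whether a step-up challenge is required. The attribute names (job_code, department, recent_actions) and the 50 percent peer threshold are assumptions made for the example, not a description of any real IAM product.

```python
# Hypothetical sketch of "don't challenge a newcomer who looks like their peers."
# Attribute names and the similarity rule are illustrative assumptions only.

def peer_group(user: dict, directory: list[dict]) -> list[dict]:
    """Peers share the same job code and department attributes."""
    return [
        u for u in directory
        if u["job_code"] == user["job_code"]
        and u["department"] == user["department"]
        and u["id"] != user["id"]
    ]

def requires_step_up(user: dict, action: str, directory: list[dict],
                     min_peer_ratio: float = 0.5) -> bool:
    """Skip the secondary challenge when most peers routinely perform the same
    privileged action; otherwise challenge."""
    peers = peer_group(user, directory)
    if not peers:
        return True  # no baseline to compare against, so challenge
    performing = sum(1 for p in peers if action in p["recent_actions"])
    return (performing / len(peers)) < min_peer_ratio

# Hypothetical directory entries enriched with recent behavior.
directory = [
    {"id": "u1", "job_code": "DBA", "department": "IT",
     "recent_actions": {"restart_db", "grant_role"}},
    {"id": "u2", "job_code": "DBA", "department": "IT",
     "recent_actions": {"restart_db"}},
    {"id": "u3", "job_code": "DBA", "department": "IT",
     "recent_actions": {"restart_db", "export_schema"}},
]
newcomer = {"id": "u4", "job_code": "DBA", "department": "IT",
            "recent_actions": set()}
print(requires_step_up(newcomer, "restart_db", directory + [newcomer]))       # False
print(requires_step_up(newcomer, "drop_all_tables", directory + [newcomer]))  # True
```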

Such hypothesizing could surely be prone to spoofing or replay, but that is why infosec engineers get paid. Another possibility is baselining behavior to identity-proof employees (or impostors), similar to the "public domain identity proofing" used by the IRS or state entities when offering social services. Taking that approach further by including things like language normalization (how we speak), auditory emanation (what we sound like), or even physical attributes like gait or elevation, identity proofing becomes not just easier but transparent. As any infosec professional will attest: transparent controls are always the best kind.

Emerging technologies such as quantum-resistant ciphers or dynamic encryption might also be areas of maturation. Since dynamic encryption uses variable (changing) cipher suites, why not base the keying material on a behavioral baseline, exhibited only by those who match the identity requirements incorporated in the cipher itself? We know (or at least this blog attempts to convince you) that the new battleground for infosec centers on the permissions, access, entitlements, and attributes of our people. That information alone, though, is not enough.
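Purely as a thought experiment, and glossing over the very real problem that behavioral measurements are noisy (which in practice would call for fuzzy extractors or secure sketches, plus a conventional secret alongside the behavior), here is one way a quantized behavioral fingerprint might be stretched into keying material. Every feature name and parameter below is an assumption for illustration.

```python
# Conceptual sketch only: deriving keying material from a quantized behavioral
# baseline. Real designs would need fuzzy extractors / secure sketches to
# tolerate measurement noise and would not rely on behavior alone.
import hashlib

def quantize(features: dict[str, float], step: float = 5.0) -> bytes:
    """Bucket noisy behavioral measurements so that small variations in the
    same person's behavior map to the same byte string."""
    buckets = sorted((name, round(value / step)) for name, value in features.items())
    return repr(buckets).encode("utf-8")

def derive_key(behavioral_features: dict[str, float], salt: bytes,
               length: int = 32) -> bytes:
    """Stretch the quantized behavioral fingerprint into keying material."""
    fingerprint = quantize(behavioral_features)
    return hashlib.pbkdf2_hmac("sha256", fingerprint, salt, 200_000, dklen=length)

# Hypothetical behavioral measurements (typing cadence, typical login hour).
salt = b"per-session-salt"
key_a = derive_key({"typing_ms_per_key": 178.0, "login_hour": 9.0}, salt)
key_b = derive_key({"typing_ms_per_key": 182.0, "login_hour": 9.0}, salt)  # same bucket
print(key_a == key_b)  # True: matching behavior yields matching keying material
```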

Continuous, comparative assessment of activity against the digital identity that represents those people is a very realistic future. Finally, and though you may have missed it, this entire hypothesis honors the one thing never even mentioned: zero trust!

John Frushour
CISO

John Frushour has 20-plus years of experience in IT and is the Chief Information Security Officer for the New York-Presbyterian Hospital System. John’s responsibilities include NYP’s security operations center, identity and access management team, vulnerability and forensics team, security engineering and architecture teams, enterprise messaging, authentication services, and more.
