AI Bias & Identity: Didn't We See This Trainwreck Coming?
AI bias isn’t just a passing headline—it’s been a clanging railway crossing signal demanding attention for years. In this episode, Richard William Bird and special guest David Lee, the “Identity Jedi,” explore how the flaws in AI they warned about five years ago have become today’s reality, despite the AI industry’s many promises that it wouldn’t happen.
They revisit the early optimism around AI as a tool to eliminate human bias—and contrast it with the present, where algorithms sometimes make baffling hiring decisions, like screening out candidates for simply being over 40.
As an additional stop on this episode's journey, they tackle the bigger question: can AI truly have an identity?
David argues that while algorithms may act with a kind of agency, they’re still just code dressed up to look intelligent. Richard expands the conversation, pointing out that digital identity is about context and trust, not just logins and passwords.
Expect sharp insights, a few laughs, and some existential curveballs as they unpack what we’ve learned (and what we haven’t) about bias, identity, and the future of AI.
Takeaways:
- The podcast dives into the complexities of AI governance, focusing on security and operational risks.
- AI bias is not just a theoretical discussion; it's impacting real-world situations, especially in hiring.
- The philosophical debate around AI and identity raises questions about how we classify AI entities.
- Understanding the context of access is critical; it's not just about who has access, but why.
Companies mentioned in this episode:
- Singular AI
- Saviynt
- AWS
- Cloud Identity
- OpenAI
- Microsoft
Transcript
Speaker B:Hi, I'm Richard Bird, host of the Yippee Kai AI Podcast brought to you by Singular AI.
Speaker B:Yippee Kai AI is the only podcast out here on the range that focuses on how we govern, control, secure and succeed with AI services, features and agents.
Speaker B:For episode two of Season one, my guest is David Lee.
Speaker B:David is the host of the Identity Jedi Podcast, Field CTO at Saviynt, co-founder of On the Corner Media, and the author of The Only One in the Room.
Speaker B:I have had the great good fortune in my career to have worked with and learned from David over the last several years.
Speaker B:We are going to dive into two topics in this podcast today, making it a little bit longer than usual, but I will tell you that it will be worth your time to listen to the end as we discuss AI Bias, a subject David and I covered more than five years ago on another podcast.
Speaker B:We then dive into an interesting philosophical divide that has emerged about AI and identity.
Speaker B:Isn't AI an identity in the context of security, technology and everyday life?
Speaker B:I look forward to your comments and to you listening in on the next episode.
Speaker B:Make sure to subscribe, like, and download the podcast if you are traveling, so you can catch up on all the news about how we're using AI out here on this wild new frontier.
Speaker C:Welcome to Yippee Kai AI, the only podcast that focuses on the operational implications of AI use, from financial and business risk to company resiliency and new security challenges.
Speaker C:We don't focus on how AI is built, we focus on how AI is being used out here in the digital wild wild West.
Speaker C:I'm your host, Richard Bird, Chief Security Officer for Singular AI, and today I am excited to be spending time with an old and dear friend.
Speaker C:Somebody that, as the old saying goes, has been a fellow good troublemaker with me for quite a while and I will let David introduce himself.
Speaker C:Many of you that follow David know him as the Identity Jedi.
Speaker C:You know about his background in the identity space and I can tell you from experience that just like myself, David is not a one trick pony of any variety.
Speaker C:He is a polymath.
Speaker C:He has many, many interests.
Speaker C:He excels at many different things.
Speaker C:So to translate David Lee just to identity, I think, really forgoes a lot of the value and talent and perspective that come with a number of other things that he's involved with.
Speaker C:So obviously I'm a huge fan.
Speaker C:I have to be.
Speaker C:He has my number, he texts me on a regular basis, I text him.
Speaker C:He keeps me honest.
Speaker C:But I will let him dive into a little bit of the details about himself before we talk about some really interesting subjects today as it relates to AI.
Speaker C:As always, we are focused on how AI is being used and we're going to talk a little bit about something that David and I discussed more than five years ago around AI and then bring that forward to a brand new and very interesting philosophical divide that is currently occurring in the AI security, operations and governance and runtime space.
Speaker C:So David, if you could introduce yourself.
Speaker A:Yeah, man, appreciate it.
Speaker A:Hey everybody.
Speaker A:David Lee.
Speaker A:Rich, thanks for having me on, man.
Speaker A:I'm excited to be in this, especially this topic because as you know, I've been texting you about this for years, about all the crap I've been doing with AI and it's truly exciting to me.
Speaker A:But David Lee, I've been in the identity access management space for over 20 years.
Speaker A:Started my career working for a bunch of three letter organizations that do a bunch of scary stuff, but stuff that's needed and then from there, you know, the journey's been crazy up and down over here.
Speaker A:Started a couple of businesses, ended up at SailPoint, AWS, Cloud Identity.
Speaker A:I've been all over the place, startups, bigger companies, and, you know, everybody asks me why I've been in this space so long. It's because the problems, the problem set.
Speaker A:I mean, at the end of the day that's what I geek out about.
Speaker A:I love solving challenging problems.
Speaker A:And I'm really starting to believe now as I've started to look across other industries, that Identity just provides this unique problem set because it's not just technical and it reaches out across so many different things.
Speaker A:You have to have so many different conversations.
Speaker A:So that's it.
Speaker A:That's me in a nutshell in that I love telling stories, I love making content.
Speaker A:There's a bunch of other stuff.
Speaker A:So we'll, we'll save that for the podcast.
Speaker C:Well, I appreciate that, and as a lot of folks know, you know, my background has a very heavy focus or orientation towards identity.
Speaker C:Like I like to remind people, I was a multi-domain CISO as well as a CIO.
Speaker C:I don't just do identity.
Speaker C:I certainly have very strong opinions about the subject, but I think it's really interesting.
Speaker C:Actually, a good friend of ours, Ian Glazer, cut a podcast just recently with you and me, and I just watched it the other day.
Speaker C:I think he just dropped it.
Speaker C:And I think something that you just said is so interesting to me as to why people who have identity in their background tend to stay focused or have a foot in that space, regardless of where their career takes them.
Speaker C:And it's because of the interesting problems, the diversity of issues that we face.
Speaker C:Right.
Speaker C:But I think the one thing that I try and remind people is, and it is just as relevant for AI as it is for human beings, is that in identity, the identity practitioners or the people that are serving as the boatmen for the transition from the human you to the digital you.
Speaker D:Right.
Speaker C:There has to be a way to cross that veil and become something in the digital space.
Speaker C:And identity is the pathway that that happens on.
Speaker C:So that really suggests that identity, from a security domain standpoint, has a huge amount of responsibility and value in keeping companies, organizations, and agencies secure, but it hasn't been treated that way.
Speaker C:And that's a different conversation for a different time.
Speaker C:I'd encourage people to go watch the Saviynt podcast on that subject, because we covered a lot of ground on that.
Speaker C:Yeah.
Speaker C:Before we really kind of get into some interesting conversation, I think, relative to AI agents, features, services, and what's happening in this philosophical argument that I mentioned earlier.
Speaker C:Before we get into that, I want to revisit our conversation from five-ish years ago about concerns that we had relative to bias as it relates to AI.
Speaker C:I don't know if you remember the feedback that you got after I cut that podcast.
Speaker C:I know the feedback that I got and much of it was, oh, don't worry, that will get sorted out.
Speaker C:We're too early.
Speaker C:I mean, think about five years ago, we're too early.
Speaker C:Like, it's not going to be a bunch of people that, you know, come from a very specific set of educational backgrounds, who have learned very specific sets of coding languages, who all have the same physical appearance, all heavily oriented towards the male gender.
Speaker C:Like, those problems won't really be problems by the time we get to when AI is actually actualized and operationalized.
Speaker C:And yet in reality now, five years on, we're seeing the consequences of this.
Speaker C:We're actually seeing AI bias in very interesting, significant, and negatively impactful ways.
Speaker C:There's stories circulating about a recent situation where AI services or AI agents made a functional determination not to include anybody over 40 as viable candidates for a particular job req in a recruiting system.
Speaker C:I've also heard about AI systems, HR AI systems in particular, that are choosing not to acknowledge the achievements of female employees or minorities. So it's really interesting that for all the great promises of AI bias not being a thing by the time we got to the promised land, it's obviously not working out that way.
Speaker C:So I just want to reflect for a second on what your perspective is, what you've seen and heard, and certainly what, you know, collectively you and I are concerned about relative to the continuing consequences of that particular piece, before we dive into the next subject.
Speaker A:Yeah, it's, it's alarming.
Speaker A:And the reason why it's alarming is because there's people who aren't alarmed or who are surprised at the fact that these things happen.
Speaker A:Because at the end of it.
Speaker A:Right.
Speaker A:I think part of what's hurt some of what we're doing now is that we always think that technology is magic, that it just makes these things and it's super cool.
Speaker A:And with the onset of what I'll call this LLM phase of AI, this next, you know, five to ten years, what really made those models explode the way they did was the achievements that the scientists had made.
Speaker A:Understanding data science and unstructured data and the ability to quickly take these patterns without having to have as much structured data as you had before.
Speaker A:And for those of you that, you know, don't want to get into the math and all this stuff, just imagine if I gave you a deck of cards that was partially in order.
Speaker A:So basically maybe I took like a third of the deck and I shuffled them, but everything else was in order from lowest to highest card.
Speaker A:It would take you a short amount of time to sort that.
Speaker A:Now if I took the cards, threw them up in the air and told you to pick them up and then sort them, it's going to take you longer.
Speaker A:Right.
Speaker A:That's kind of the high-level difference between structured and unstructured data.
Speaker A:Well, we got really good at being able to take in the unstructured data in those algorithms, and that allowed us to then create the higher levels, which are the LLMs.
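David's card-deck analogy can be made concrete with a quick sketch (editor's illustration, not from the episode): insertion sort does far less work on a deck where only the first third is shuffled than on a fully shuffled one, which is roughly the gap between mostly structured and fully unstructured data.

```python
import random

def insertion_sort_comparisons(cards):
    """Insertion sort; returns (sorted list, number of comparisons made)."""
    cards = list(cards)
    comparisons = 0
    for i in range(1, len(cards)):
        j = i
        while j > 0:
            comparisons += 1  # one comparison per inner-loop step
            if cards[j - 1] <= cards[j]:
                break
            cards[j - 1], cards[j] = cards[j], cards[j - 1]
            j -= 1
    return cards, comparisons

random.seed(0)
deck = list(range(52))

# "Partially shuffled" deck: only the first third is out of order.
partial = deck[:]
third = partial[:17]
random.shuffle(third)
partial[:17] = third

# Fully shuffled deck: every card thrown in the air.
messy = deck[:]
random.shuffle(messy)

_, easy_work = insertion_sort_comparisons(partial)
_, hard_work = insertion_sort_comparisons(messy)
print(easy_work, hard_work)  # the fully shuffled deck costs far more comparisons
```

The sorted result is the same either way; only the amount of work differs, which is the point of the analogy.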
Speaker A:So why, why am I taking you down this path?
Speaker A:Well, the reason is because you're only as good as the data behind you.
Speaker A:All the wonderful things, all these things that seem like magic that we're doing, it's just data behind it.
Speaker A:And what did we feed these LLMs on?
Speaker A:Well, we gave it the freaking Internet for 10 years and said, go learn about the world and how we do language and go look at the patterns in language and be able to take those patterns and predict and respond to the pattern when somebody asks you something.
Speaker A:Well, folks, go on X for about 10 minutes and you will see what is on the Internet.
Speaker A:Like we, we are a biased society in a biased world.
Speaker A:And so when you feed it on imperfect data, it's going to then take that with it.
Speaker A:There's nothing that magical about the algorithms that is going to take that bias away, because the people that programmed them have that bias.
Speaker A:And so what alarms me the most is that we still look at this and we are shocked that it's a problem.
Speaker A:We saw this coming years ago.
Speaker A:Back then we were talking about it.
Speaker A:I had been doing research, and I had just given a talk.
Speaker A:It was around the time I gave the talk at Identiverse on the identification side.
Speaker A:We were starting to do things with AI and ML to give to police systems for identification, and we were having false matches when it came to melanated people.
Speaker A:Because for all of us that are melanated, when you take pictures, lighting matters, right?
Speaker A:So if you take something in bad light, it's really hard to see and break out those different features.
Speaker A:We were having systems that were mismatching people, leading to people's arrest in real life.
Speaker A:So the use of this data, and not understanding the bias quality behind it and just going with it, produced those problems then.
Speaker A:And we're still doing the same thing now.
Speaker A:Because to your point, I guarantee you, if we go walk into any AI company, if we go into OpenAI right now and walk into their research facility where all the best researchers are, they're going to be white and they're going to be Asian and they're going to be men.
Speaker C:Yeah.
Speaker C:I mean, you can't get around that truth because it just is so obvious, Right.
Speaker C:I think that there's something that I've learned that is combined with that truth that exacerbates the problem.
Speaker C:So when we talked about it five years ago, you and I talked about it very much from the DevOps perspective, right?
Speaker C:Which was how are these things going to be built?
Speaker C:And do the people that are building them have the capacity to not instantiate their own viewpoints, their own backgrounds, their own any number of different variables?
Speaker D:Right.
Speaker C:But what I think is interesting is, is that, and you hinted at it, is the reality that there are no perfect data sets.
Speaker D:Right.
Speaker C:And I do think that when we look at the grand promises of AI, it comes with an expectation that the learning data sets for those AIs are pristine, right.
Speaker C:It is trustable, it is high quality, it is high fidelity.
Speaker C:And when you look at the reality of data in the world, it's anything but.
Speaker C:In fact, it kind of raises, back to your point, how could we not see this coming?
Speaker A:Right?
Speaker C:Or people are just denying that it happens.
Speaker C:So just for the sake of argument, let's say that we've got 40 years' worth of home lending data, and within that home lending data there have been 40 years of foul practices: robo-signing from just a few years ago, or keeping loan applications from being processed for certain neighborhoods if you were Black, or if you were Asian, or if you.
Speaker C:Any number of different things.
Speaker C:Like, we treat that data as if it is separate from the biased processes that created it, except inherent within that data, because the process was biased, is the bias itself.
Speaker D:Right.
Speaker C:It's like saying that if you have greasy hands and you touch a mirror, you're not going to leave fingerprints.
Speaker C:Like it doesn't make any sense.
Speaker D:Right.
Speaker C:The data itself is biased.
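Richard's point, that a model fit to biased historical data reproduces the bias, can be sketched with a hypothetical toy data set (editor's illustration; the neighborhoods and numbers are invented):

```python
# Toy illustration: historically, loans in neighborhood "B" were denied
# regardless of income. A model "trained" on that history inherits the bias,
# even though no protected attribute appears anywhere in the data.
history = [
    # (neighborhood, income_in_thousands, approved?)
    ("A", 40, True), ("A", 55, True), ("A", 35, True), ("A", 80, True),
    ("B", 40, False), ("B", 55, False), ("B", 35, False), ("B", 80, False),
]

def majority_rule(records):
    """Fit the simplest possible model: a per-neighborhood majority vote."""
    votes = {}
    for hood, _, approved in records:
        votes.setdefault(hood, []).append(approved)
    return {hood: sum(v) > len(v) / 2 for hood, v in votes.items()}

model = majority_rule(history)
# Two identical applicants, differing only in neighborhood:
print(model["A"], model["B"])  # True False: the old redlining survives in the data
```

Income never enters the decision; the historical outcome pattern alone is enough to carry the bias forward.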
Speaker C:And I've seen this recently in some studies that I think are super fascinating.
Speaker C:There was a study that just recently came out where, you know, the OpenAIs of the world and the Googles and the AWSes of the world are declaring, don't worry about hallucinations.
Speaker C:We will get to a point where AI agents and services will be able to self acknowledge or self realize that they're hallucinating.
Speaker C:And I expect that promise to be fulfilled just like the promise of no bias in AI was supposed to be fulfilled five years ago.
Speaker A:Right.
Speaker C:But in kind of looking at that, these researchers, and I really do trust the researchers on this stuff, dug in, and they said: because the learning data that is being pushed into these models is biased with a purely positive view of that data, and because those learning sets don't also include negative, controversial, or contravening data, there is no possible way, mathematically, for a model to ever be able to tell itself, or evaluate itself, that it was hallucinating.
Speaker C:It can never self-discover that.
Speaker D:Right.
Speaker C:Because it is only being fed information that allows it to focus on completing whatever it's been coded for.
Speaker C:It's not given the opposite or the negative variable.
Speaker C:It's like all matter with no antimatter.
Speaker D:Right.
Speaker C:It's like you can't successfully munge that calculation in a way that the engine looks at it and goes, I'm sorry, this is bad output.
Speaker D:Right.
Speaker C:So I think it definitely has become a recognition that the AI world is just as biased, and maybe more so, based upon the data sets as it is on the developers.
Speaker A:Yeah.
Speaker A:And it comes down to, for me, when I look at all of this, right?
Speaker A:And I'm trying not to get super deep into this.
Speaker A:But for those of you, like me, who like to go and research it, it goes back to a fundamental problem of having to be able to defend the actual truth, to define a fundamental aspect: what is truth?
Speaker A:So it goes back to like logics and proofs.
Speaker A:How do I get from an equation and prove that that equation to be true?
Speaker A:I have to have pillars that are absolutely fundamentally accepted as law.
Speaker A:That's where I start with.
Speaker A:And so from there I've got to take some high level equation and come all the way back down.
Speaker A:And that's how I prove something.
Speaker A:And so from mathematics to philosophy, you get into these very spiral conversations of how do you do that: how do you defend a stated viewpoint on any given thing?
Speaker A:You have to then turn around and say, all right, my viewpoint is based on these things.
Speaker A:But for you and I to agree or even disagree, we have to set a standard of communication and saying, here's my viewpoint, you disagree because X equals Y. I agree because Y equals B.
Speaker A:And then we have to start going, okay, well then if X equals Y, then what is that?
Speaker A:And it goes down and down and down.
Speaker A:And so looking at all of this, to your point, it's going to be, I would dare say it's almost mathematically impossible for an algorithm to defend and know itself as hallucinating.
Speaker A:Because based on the data you gave it, it doesn't have the ability to reason over its output, look at something, and say, that's bad data. How would it know it's bad data?
Speaker A:Based on what?
Speaker A:What does good data look like?
Speaker A:You'd have to give it good data and then bad data, and then feed it and go, okay, yes, I gave you bad data.
Speaker A:But when you're just giving it data to begin with, how does it know what's good or bad?
Speaker A:So how is it going to know that it just hallucinated and gave you something that isn't set?
Speaker A:So again, I'm not a data researcher.
Speaker A:Maybe somebody comes on here and goes, well, actually, David, we've created these. Hey, great, awesome.
Speaker A:But I know that from a pure logic perspective, you have to have something for it to compare against.
Speaker A:And then the question becomes, what did you use to create that comparative model?
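David's argument, that a model trained only on "good" data has no basis for flagging its own bad output, can be illustrated with a toy unigram scorer (editor's sketch, not from the episode): nonsense assembled from the same frequent words scores at least as well as a sensible sentence, so no threshold learned from the good data alone can reject it.

```python
from collections import Counter

# "Train" on good sentences only: word frequencies become the whole model.
good = ["the cat sat on the mat", "the dog sat on the rug"]
counts = Counter(w for s in good for w in s.split())
total = sum(counts.values())

def avg_word_prob(sentence):
    """Average per-word probability under the unigram model."""
    words = sentence.split()
    return sum(counts[w] / total for w in words) / len(words)

truthful = "the cat sat on the rug"
hallucinated = "the the the mat dog cat"  # nonsense built from the same words

print(avg_word_prob(truthful), avg_word_prob(hallucinated))
# The nonsense scores at least as high: with only "good" data, the model has
# no comparative signal from which to flag its own bad output.
```

A real LLM's scoring is vastly richer, but the logical gap is the same: detecting "bad" requires examples of bad, or some external comparison, not just more of the good.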
Speaker C:Like you said, I'm not a data scientist, right?
Speaker C:I'm not a great technologist.
Speaker C:I will never admit to being, you know, great at very much of anything.
Speaker C:But I have always phrased things in a term that I learned back in the military a long time ago, which is: if you can't answer my simple questions, you probably can't answer my complicated ones.
Speaker D:Right.
Speaker C:So when I ask simple questions about like these kinds of things and the answer kind of sounds like nonsense, all right, then I get worried about the more complicated questions.
Speaker D:Right.
Speaker C:But, you know, I think there's something interesting here that, you know, gets missed.
Speaker C:And certainly you and I have spent a number of years now kind of neck deep and learning everything that we can about AI.
Speaker C:And what I find interesting, and this is a nice segue into the topic I really wanted to dive into with you, is that a lot of people are trumpeting that AI is, you know, going to replace humans, take jobs.
Speaker C:It's this anthropomorphization, I have to say it slowly because it's a really hard word to get all out there at once, of AI: giving it human characteristics.
Speaker C:I think when we look at the algorithm aspect, one of the things that I find fascinating is, is becoming very clear and I actually just saw a brand new news story drop today from Microsoft about tensions between them and OpenAI around AGI.
Speaker C:It's becoming very clear that an AI service feature agent that has the capability to munge through thousands of probabilistic scenarios to be able to arrive at a conclusion on what is the most likely output, right.
Speaker C:Or the most likely determination is nothing like human nonlinear thinking.
Speaker C:Human nonlinear thinking has nothing to do with probabilism.
Speaker D:Right.
Speaker C:Human nonlinear thinking is all of the in-the-brain things that we have as human beings, that goes with intuition, and where my nose is leading me, or just how I feel that day, whatever those components might be, and they're fundamentally different things.
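The "most likely output" machinery Richard describes can be reduced to a toy bigram model (editor's illustration, not from the episode): it simply picks the statistically most frequent continuation, with no intuition involved.

```python
from collections import Counter, defaultdict

# Tiny corpus; consecutive word pairs become bigram counts.
corpus = "the agent reads the log and the agent writes the report".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Pick the single most probable continuation: pure frequency, no intuition."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "agent": 2 of the 4 continuations of "the"
```

Real models operate over vastly larger contexts and vocabularies, but the selection step is still probabilistic ranking, which is the contrast with nonlinear human thinking being drawn here.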
Speaker C:And now this is the good diving board to jump into the next topic, which is.
Speaker C:So it raises this question, and we're going to come into the domain that we've both worked in together for so long: identity.
Speaker C:There's this conversation going on that, well, these agents are human actors, or these agents are actors that are worthy of an identity.
Speaker C:And there's a philosophical divide that's manifesting around this.
Speaker C:OEM manufacturers of software and SaaS solutions are saying these are service accounts, right?
Speaker C:All the while telling you about all the amazing things that these, you know, AI agents can do that make them better than using just humans, but it's a service account.
Speaker C:And then on the identity and access management side, both in the practitioner and solution space, you have everyone rushing in and going, well, let's just give it an identity, right?
Speaker C:Let's just put it in a directory structure.
Speaker C:Let's just use all the same architectures that we've used for the last 25 ish, 35 ish years and everything will be great.
Speaker C:It's a conversation that terrifies me, because I think that there's a lot of opportunism in the conversation about what best suits me.
Speaker C:Either it suits my viewpoint, or it suits my revenue model, or it suits my desire not to actually have to rethink my entire security stack, because now things might be different.
Speaker C:And I know you've been, you know, kicking around in this kind of conversation for a while now, so let's just, let's just dive in.
Speaker C:And first of all, the big banner: is AI an identity?
Speaker C:Let's just jump into that.
Speaker A:Yeah, it's not one thing, it's this complicated mix of different things, right?
Speaker A:It is, it is something that looks and acts like an identity but isn't.
Speaker A:And if I talk to a hardcore person who says, absolutely, it's an identity, the closest thing that I would model it to, the closest thing that it maps to, is contingent workers.
Speaker A:Because when you have a contingent worker, you're bringing them on for a very specific function and for a very specific period of time.
Speaker A:Right?
Speaker A:There's a function in my organization that I need done, and I only need it done every now and then.
Speaker A:So if you think of that from a business perspective, this isn't a major part of my business where I need to actually have a business function around it.
Speaker A:It is these functions that pop up that I need some way to handle.
Speaker A:It's a surge in my current operations.
Speaker A:So I need a section of, or a subset of people or functions to go and do this thing.
Speaker A:So when I look at that, I go, okay, I compare it to like seasonal workers for like UPS or FedEx or something like that, right?
Speaker A:They know every season, you know, Thanksgiving, holiday season, their function doesn't change.
Speaker A:I still ship packages, but what happens is the volume of packages that I get outpaces the people that I have to go handle them.
Speaker A:So I have to bring in a set of workers to come in and help manage what I'm doing.
Speaker A:Surge pricing.
Speaker A:You can look at it as, you know, dynamic scaling in AWS, but this, in real life, is what I do.
Speaker A:So what we've done is, okay, we'll just get them signed in.
Speaker A:They don't need the same access that the standard employees do.
Speaker A:Half the time they don't even need to log into those systems.
Speaker A:They don't need to do any of that.
Speaker A:They just need to be able to operate and shadow the current people that are there.
Speaker A:So that way we can scale and take that.
Speaker A:Most of the time, what these organizations do is they create shell accounts, or they create, you know, some kind of persona or profile for them, just to get them access and loaded in.
Speaker A:These people will probably never even log into a system.
Speaker A:They don't even assign them email addresses.
Speaker A:Why?
Speaker A:You're never going to email them.
Speaker A:They're going to be here for 90 days, 95 days, whatever, and then they're out.
Speaker A:And then the same thing is going to happen next year.
Speaker A:And so to me, the rise of agentic AI and AI agents is the same thing.
Speaker A:It gives us the ability to take certain functions that we have within business and go, I want you to go operate this function.
Speaker A:But instead of just giving it one specific function, you can give it enough guardrails to go, yeah, these range of functions I want you to go do, but only at a specific time and go do these things.
Speaker A:And when we think about the way business operates, we're so used to tying that directly to a function or a person.
Speaker A:To your point, we're resistant to rethinking.
Speaker A:Well, that's not really what this is.
Speaker A:So when people say, oh, it's going to be replacing jobs, that might be the end effect.
Speaker A:But in reality, is it really replacing jobs or is it replacing functions that were happening in the business unit that I don't necessarily need all the time?
Speaker A:What we're going to see is an efficient workforce model kind of coming out and going, well, really what I need you to do, knowledge worker, is these things.
Speaker A:I don't need you to be doing these things.
Speaker A:Let me give that to this set of functions, and let me go use you where I can't hand those functions off to go and do this.
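The contingent-worker model David describes, a narrow set of named functions plus a hard expiry, can be sketched as a simple grant check (editor's illustration; the class and function names are invented, not from any real product):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgentGrant:
    """A contingent-worker-style grant: named functions, a hard expiry, nothing else."""
    allowed_functions: frozenset
    expires_at: datetime

    def authorize(self, function, now=None):
        # Deny anything outside the named functions or past the expiry.
        now = now or datetime.now()
        return function in self.allowed_functions and now < self.expires_at

grant = AgentGrant(
    allowed_functions=frozenset({"sort_packages", "print_labels"}),
    expires_at=datetime(2025, 12, 26),  # gone after the holiday surge
)

print(grant.authorize("sort_packages", now=datetime(2025, 12, 1)))  # True
print(grant.authorize("read_email", now=datetime(2025, 12, 1)))     # False: never granted
print(grant.authorize("sort_packages", now=datetime(2026, 1, 2)))   # False: expired
```

The contrast with a standard directory identity is that there is no standing account to clean up; the grant denies by construction once the window closes.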
Speaker A:So, the conversation I was having earlier today: there was a young man that I just started mentoring, and he was in communications, doing social and, you know, writing, things like that.
Speaker A:He was like, well, I wanted to get out of this field because I just think that, you know, AI is going to take it over.
Speaker A:And I'm like, I can see your concern and it's valid.
Speaker A:I think what's more likely to happen is that the functions you do will kind of get outsourced.
Speaker A:This thing can write whatever, but the ability to curate that and then turn it into something valuable, that part, I think, is where we're going to need a true knowledge worker, and you would shift from just writing this stuff to going, here's all the stuff that's written.
Speaker A:Here's our engagement data we're tracking on it.
Speaker A:What do I do with this to make it valuable?
Speaker A:How do I turn this into a sales lead or a sales opportunity, or bring value back to the business with it?
Speaker A:I would say that's the area that, for now at least, AI wouldn't be able to do. But who knows?
Speaker C:Well, you know, the whole conversation about, you know, will AI take my job is certainly a whole other episode.
Speaker D:Right.
Speaker C:My statement on this is the same as I've said now for the last year and a half, which is: there are no tape room operators anymore.
Speaker C:There sure are a hell of a lot more compute systems.
Speaker D:Right.
Speaker C:Every technology evolution creates a redistribution of labor.
Speaker C:And it's not just computers, it's not just cloud, it's not just AI.
Speaker C:It's tractors.
Speaker D:Right.
Speaker C:Like there was a very long period of time where vehicles and horses existed on the same roads for a number of years.
Speaker C:Obviously people who have work that is, is very linear in nature or very process step based kind of falls back into what we talked about a little bit earlier.
Speaker C:Those are things that AI does very well.
Speaker D:Right.
Speaker C:When we get into the non linear things, when we get into complicated, you know, different scenarios that human beings are good at in general, you know, it turns out that AI is actually very, very bad at.
Speaker D:Right.
Speaker C:I've seen a lot of research on the fact that at some point accuracy rates or error rates stay at a certain level until you introduce this next level of information into the learning model.
Speaker C:Then error rates go to 100% and performance goes to zero, like immediately.
Speaker D:Right.
Speaker C:And I don't necessarily think that that's a, well, in the future, more GPUs are going to solve that.
Speaker D:Right.
Speaker C:I think that, not to go back down the mathematics trail, because Lord knows I'm bad at math, but there are certain things within theoretical mathematics, or within really advanced mathematics like planar geometry, that clearly show that we are bound by a number of immutable laws that cannot be defeated.
Speaker C:And I do think that we're starting to see kind of the manifestation of some of those things now.
Speaker C:I want to come back to this: we both agree, like, I don't think that AI is a human proxy.
Speaker C:I think that AI can do human functions.
Speaker C:I think that it's a service account.
Speaker C:I think service accounts are conveniently being used by OEM manufacturers and SaaS providers to basically bypass having to inform customers when they make any changes to their systems.
Speaker C:I said it, I will stand by it, I will argue it all day.
Speaker C:I think it will change with litigation.
Speaker C:But I do see a world in the technology side that wants to play.
Speaker B:Both sides of the game.
Speaker D:Right?
Speaker C:There's a very famous case right now with character AI that's going on.
Speaker C:Google has a very large footprint with character AI.
Speaker C:Young man committed suicide, was listening to this customized bot that he built.
Speaker C:And the argument that was made, and just defeated last month, was: well, we want this case dismissed because that AI bot has freedom of speech.
Speaker C:So we see the technology side that wants to fully capitalize on this idea that these things are humans or have human-like characteristics.
Speaker C:But then when it comes to providing safety, security, governance, control, they go, well, I don't want to tell you about any changes that I'm making in my agent.
Speaker C:Like we're in a very, very schizophrenic time, I think, in AI development and, most importantly, back to how it's being used, in AI operationalization.
Speaker C:So we look at this identity stuff.
Speaker C:One of my operating theses is, and I'm saying this guilty as charged, hand on heart, personally, as part of the community, that one thing we've done very, very badly, and we're going to come back to a point that you made to prove it, is entity classification.
Speaker C:Like we have lived in a world for 30-plus years where, in identity, we have a hammer.
Speaker C:Therefore every problem looks like a nail.
Speaker C:And so we tend to think about everything in identity purely in a workforce envelope.
Speaker C:And I used this example a couple of weeks ago to prove how bad we are at entity identification, or entity classification.
Speaker C:I asked somebody, well, how long has NHI been a problem?
Speaker C:And somebody said, well, NHI was created as a category about two years ago.
Speaker C:For those that don't know, let me expand the acronym: non-human identity.
Speaker D:Right?
Speaker C:And they said, well, NHI was developed as a category about two years ago.
Speaker C:Now, David and I have actually worked very, very deeply in the non-human identity space.
Speaker C:So we have very strong opinions about this as well.
Speaker C:But I was like, really?
Speaker C:I was like, so non-human identities didn't exist before two years ago?
Speaker C:And you can hear everyone like pause for a second.
Speaker C:I'm like, how long have non-human identities of every variation, stripe and flavor, from contingent workers to contractors to machine-to-machine accounts to, you know, a widget that's driven by a specific secret,
Speaker C:How long have they existed?
Speaker C:It's decades, right?
Speaker C:And now all of a sudden, eureka, we have a new solution for this thing.
Speaker C:Why?
Speaker C:Why?
Speaker C:Because we refuse to acknowledge the fact that each one of those different variations of an identity was different than the others.
Speaker C:And because we didn't go through this entity classification exercise, we tend to treat everything like it is a workforce identity.
Speaker C:Now I'm seeing this happen again relative to AI, where people are going, I'll just give it a UID and toss it into a directory.
Speaker C:And then it raises questions about, well, 95% of those agents aren't yours.
Speaker C:So are you just tagging them with a UID and then putting them in your directory?
Speaker C:So I'm going to kind of come back to what I'm concerned about: effectiveness around the things that are being proposed, in terms of your own experience.
Speaker C:So the contingent worker side of the equation: just universally, how great of a job have we done, and are we currently doing, with contingent workers in most large enterprise companies?
Speaker C:Good, bad, horrible?
Speaker A:Horrible.
Speaker A:We don't. I'm talking like less than 5% of organizations out there probably even have a process for how they handle it.
Speaker A:And here's proof to that, right?
Speaker A:You hear this and go, oh God, David, you're being really radical about it.
Speaker A:Cool.
Speaker A:Go to your organization right now and go ask somebody, hey, how do we onboard contractors, how do we handle and manage contingent worker access within our organization?
Speaker A:See how fun that conversation is.
Speaker A:Watch how many people you talk to in that conversation.
Speaker A:Then here's the kicker.
Speaker A:Ask them who owns this process?
Speaker A:Yeah, have fun with that one.
Speaker A:So this, to me, I hope is the inflection point, as fast as AI is going.
Speaker A:But sometimes we get cynical. Maybe not. It really, truly comes down to the both simple and complex answer of identity and access management.
Speaker A:It is who has access to what.
Speaker A:And the part that we never say is the why. When you break that sentence down, that is a true architecture for how we handle any of this.
Speaker A:To your point about entity classification, we as an industry default to, like, oh, it's this human workforce over here.
Speaker A:No. The who could be a workforce identity.
Speaker A:It could be a customer, it could be a partner, it could be a service account, it could be an agent.
Speaker A:You need to be able to classify all the different whos.
Speaker A:So it's: who has access?
Speaker A:What do they have?
Speaker A:What was assigned to them?
Speaker A:How did they get it?
Speaker A:What's the life cycle around that?
Speaker A:To what? With that access, what does that access give them the permissions to go do?
Speaker A:What are they accessing?
Speaker A:So I've got to inventory the things that are accessible from the access that I gave an identity.
Speaker A:And why, why do they have that access?
Speaker A:Are they using it appropriately?
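David's who/what/why/lifecycle checklist maps naturally onto a data structure. A minimal sketch, where the field names, the example grant, and the dates are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AccessGrant:
    """One answer to 'who has access to what, and why'."""
    who: str                 # the actor: employee, partner, service account, agent
    what: str                # the resource the grant unlocks
    permissions: list[str]   # what the access gives them the permissions to go do
    why: str                 # business justification -- the part we rarely record
    granted: date
    expires: date            # lifecycle: no grant should live forever

    def is_active(self, today: date) -> bool:
        """A grant is only good inside its lifecycle window."""
        return self.granted <= today <= self.expires

grant = AccessGrant(
    who="agent-42",
    what="salesforce",
    permissions=["read:collateral"],
    why="generates the weekly pipeline summary",
    granted=date(2025, 1, 1),
    expires=date(2025, 3, 31),
)
print(grant.is_active(date(2025, 2, 1)))  # -> True
print(grant.is_active(date(2025, 6, 1)))  # -> False: lifecycle expired
```

The notable design choice is that `why` is a required field: a grant with no recorded justification simply cannot be constructed, which is the "part we never say" made mandatory.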
Speaker A:When you really go back to that and break that down, it literally solves all of these things that we run up against.
Speaker A:But I think sometimes we say that sentence in identity and we stay at the superficial level, because you go look at the marketing for every freaking vendor out there:
Speaker A:We help you determine who has access to what.
Speaker A:Do you really?
Speaker A:No, you don't.
Speaker A:It's, from a vendor's perspective, we do this.
Speaker A:And so with this one, to me, I look at the rise of the AI agents and NHI, and really I'll even tie back in this old problem we've seen for a while, what I call the persona problem that we see in healthcare and education; you and I even saw it at one company.
Speaker A:What it comes down to is there is a who, the actual actor, somebody that's doing something, and then understanding that actor's ability to access certain resources in specific contexts.
Speaker A:And to me, the AI and the AI agents are that side of it.
Speaker A:It is just a set of functions and context and access.
Speaker A:And it can be dynamic in what it turns out to be.
Speaker A:So at any given time it can act like a database administrator and go do these things, but then it can turn around and act like a salesperson and go look at this collateral.
Speaker A:It can act like a product marketer and go create this, but it's accessing all this information.
Speaker A:And so the control, to me, really isn't on the AI agent.
Speaker A:And we've got to, you know, put that in a directory and classify that? Sure, if you want to.
Speaker A:But it's really on, you know, back to the stuff that we don't like to talk about: understanding the data and the access and the items that are being accessed, and being able to map that out and do that control.
Speaker A:Like you've said this before, right?
Speaker A:Like, when we look at security architectures, we just started from the outside in.
Speaker A:We didn't start from: what are we actually trying to protect?
Speaker A:We built a castle, built a moat.
Speaker A:We got dragons, we got spears, we got oil, we got tar.
Speaker A:And it's like, great.
Speaker A:We built all of this for what?
Speaker A:What are we protecting?
Speaker A:What's behind all this?
Speaker A:What's the reason we're protecting it?
Speaker A:So if we start from there and go, well we got to protect this 10 ounces of gold.
Speaker A:Well, what's the best way to protect it? To build up all this stuff?
Speaker A:Maybe it is, maybe it isn't.
Speaker A:Maybe the best way to protect it is, what if I put it on this nondescript mule and I just send it that way, because nobody's going to pay attention to it?
Speaker A:That's what we have to come back to.
Speaker A:So I think as we start to see some of these things, we all know the first year is going to be BS.
Speaker A:All these vendors are going to call and say all this stuff, whatever, but it's really digging back down to it.
Speaker A:I really expect data security posture management, data classification, all these ugly words that we wanted to do years ago, because they were really hard.
Speaker A:I think that's all going to come back up, because that's what we're going to need to be able to understand.
Speaker A:And again: who has access to what, and why.
Speaker C:Well, I want to make sure that we provide some value to the listeners here, relative to the two of us.
Speaker C:One person directly in identity, still in identity, who can't get himself free from identity.
Speaker C:And one who's just the masochistic bystander that can't keep himself out of identity topics.
Speaker C:You know, the value point that I would say here, as it relates to where we already are with AI, is we have to have intellectual honesty about how good we have been at identity security, specifically, for the last 30 years.
Speaker C:Workforce.
Speaker C:Like, actually, if you get rid of standing privileges and, you know, use the tools that are available today, I say it all the time: workforce is solved.
Speaker C:If you get popped for an escalated privileges breach, that was your own fault, right?
Speaker C:In the workforce space, it is mature enough, the solutions exist, the knowledge exists, the best practices exist.
Speaker C:You should never get had based on an identity breach in workforce.
Speaker D:Right?
Speaker C:Contractors, we are universally terrible at. Customer access management, we are universally terrible at. Contingent workers.
Speaker C:Volunteers. Like, at the New York City Marathon, a guy told me one time, he was like, everything in the New York City Marathon is digital.
Speaker C:And every one of my employees, except a core staff, the thousands of them, are volunteers.
Speaker C:How do I provision them access?
Speaker C:Right.
Speaker C:Like the use cases of different entity classes that we do poorly at are more than there are stars in the sky.
Speaker D:Right.
Speaker C:And yet when I look at the AI space, a lot of the conversations are coming back to a suggestion that all of these sins of the past, and poor performance against other security controls, even more than just identity, are going to magically get better with AI.
Speaker C:And I think that the real issue here is they're going to, not magically but literally, get worse because of AI.
Speaker C:AI's capability, its speed, its focus, its lack of indecision bias, its ability to focus on the mission that it's been given and then basically go get it done, inclusive of bypassing or capitalizing on any controls or weaknesses within your environment.
Speaker C:So, obviously, identity, in our minds, is core to security.
Speaker C:I hope the rest of the world catches up on that subject.
Speaker C:But it does raise the question.
Speaker C:You mentioned it with DSPM, with data classification. It raises the reality, and this may be the thing that the listeners don't want to hear, that the AI world will be a world of overlapping controls and overlapping solutions.
Speaker C:We can argue about whether AI specific will stay that way or if it'll just become technology five years from now, just like all other technologies.
Speaker C:But given your experience, would you be comfortable with any organization simply trying to control its risk and attack surface and exposure surface with only identity?
Speaker A:Absolutely not.
Speaker A:Like I.
Speaker C:Are you sure?
Speaker C:I mean, you got those three letter agencies in your background, right?
Speaker C:Like that would work, wouldn't it?
Speaker A:No, it has to be. The answer to this question has always been: it has to be a comprehensive solution.
Speaker A:That's it.
Speaker A:And what I mean by comprehensive, let me be very clear on that.
Speaker A:Not comprehensive as in, sure, maybe you could put it all in one comprehensive package.
Speaker A:But my point is multiple points of failure, right?
Speaker A:You have to have multiple things that are checking and looking at it.
Speaker A:I mean, I always use this example. You've heard me use this, like, you know, thousands of times at this point, right?
Speaker A:I tell people this is a very simple concept, because we do it as human beings naturally all the time, right?
Speaker A:The moment somebody knocks on your door, you walk through a multi point process every single time to authenticate and authorize this person before you let them in your house.
Speaker A:Even once they come in the house, you're still doing that based on some context and basic things.
Speaker A:Who you know, how much you trust them, things like that.
Speaker A:If there's a complete stranger at your door, you're going to authenticate who is it.
Speaker A:You're going to look at your phone or whatever you have or your panel to see who it is.
Speaker A:You're going to visually look at the person, and that's the first thing you're doing here.
Speaker A:You're trying to authenticate and go, do I recognize who this person is?
Speaker A:Is there a visual connection?
Speaker A:No, I don't.
Speaker A:So then what happens?
Speaker A:Stress level goes up just a tad.
Speaker A:What's that?
Speaker A:That's your body and your brain going, nope, don't know this person.
Speaker A:Your fight or flight response is kicking in.
Speaker A:I ask, who are you?
Speaker A:What do you want?
Speaker A:Right?
Speaker A:Give me some context, Right?
Speaker A:What I'm doing is figuring out, like, do I need to authenticate you into the house?
Speaker A:Oh, I'm the AC guy, here to go fix your AC.
Speaker A:Oh, crap.
Speaker A:That's right.
Speaker A:I did schedule that today.
Speaker A:What happens?
Speaker A:Hormones come down, stress goes down a little bit.
Speaker A:Let me go open the door.
Speaker A:But this person's never been in your house before.
Speaker A:You don't know who this person is.
Speaker A:They come in, you go, hey, the AC's right there, over in the corner.
Speaker A:That's what they're authorized to go do.
Speaker A:If all of a sudden that AC person tried to walk upstairs and go into your master bedroom, you'd be like, where the hell are you going?
Speaker A:What are you doing?
Speaker A:Right?
Speaker A:You're not authorized to go there.
Speaker A:I gave you authorization to go here, nowhere else.
Speaker A:If it's your cousin or your brother or your best friend, totally different conversation.
Speaker A:The authentication still happens.
Speaker A:Who's at the door?
Speaker A:That's Bird.
Speaker A:Bird! What's up, man?
Speaker A:What are you doing in Atlanta?
Speaker A:Come on in, right?
Speaker A:Open the door, dude.
Speaker A:Open the refrigerator, grab a beer, whatever.
Speaker A:Sit down, talk, go wherever.
Speaker A:You run the house.
Speaker A:Why? Best friend.
Speaker A:Your authorization and that context is bigger, because there is a known trust within there.
Speaker A:So, like all this stuff we naturally do as human beings all the time, we just don't really think about it.
Speaker A:The same thing has to happen, right, on any kind of system.
Speaker A:So, no, I wouldn't completely trust a system with just identity.
Speaker A:I need all of that.
Speaker A:I need, where's my trust?
Speaker A:Where's my context?
Speaker A:Where's my behavioral patterns, right?
Speaker A:Like, has this person been in my house before?
Speaker A:Oh, they Come over all the time.
Speaker A:Oh, I'm expecting this.
Speaker A:These are all contextual things.
Speaker A:There are policies in my home.
Speaker A:What am I defending? What am I sensitive about?
Speaker A:Right.
Speaker A:Yeah.
Speaker A:You can go into the guest bedroom, but are you going to come into the studio where I've got thousands of dollars' worth of equipment?
Speaker A:No.
Speaker A:You know what I'm saying?
Speaker A:Like, you bring your toddler over, it's like, hey, they can go over here, but they're not running in there.
Speaker A:Right.
Speaker A:Like, the stuff that they can break in there, causing me a bad day, same type of thing.
Speaker A:When I'm looking at my systems, I used to always tell people when I did consulting, I'd walk in and they'd go, how do I get started? Which apps do I onboard first?
Speaker A:I'd go, very simple question.
Speaker A:If this application gets popped, who has a bad day?
Speaker A:How bad of a day is it?
Speaker A:If I tell you I got access to XYZ system, you're like, I don't care.
Speaker A:If I tell you I got access to Salesforce, Yeah, I'm probably getting fired.
Speaker A:Yeah, we should probably start with that one then.
Speaker A:Like, same type of thing.
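David's "who has a bad day" triage question can be sketched as a simple risk ranking. The applications and blast-radius scores below are made up for illustration:

```python
def onboarding_order(blast_radius: dict[str, int]) -> list[str]:
    """Rank applications by how bad a day a breach causes (higher score = worse),
    so the riskiest systems get identity controls first."""
    return sorted(blast_radius, key=blast_radius.get, reverse=True)

# Illustrative scores answering: "if this application gets popped,
# who has a bad day, and how bad is it?"
apps = {"salesforce": 9, "wiki": 2, "hr_system": 8, "xyz_system": 1}
print(onboarding_order(apps))  # -> ['salesforce', 'hr_system', 'wiki', 'xyz_system']
```

The scoring itself is the hard, human part; the code only captures the decision rule of starting where a compromise hurts most.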
Speaker C:Well, excellent.
Speaker C:Well, I'm going to start driving this particular episode towards the corral to put it up for the evening.
Speaker C:I appreciate your time as always, David.
Speaker C:Actually, I think there's an opportunity, you know, maybe not in this season.
Speaker C:It might be early next season once we get more, you know, experience under our belt relative to things like the authorization plan and delegation and how that kind of plays into the AI space, which I think is going to be really interesting times.
Speaker C:But I really appreciate the time that you've shared with us.
Speaker C:For everyone that's listening, here's what I would recommend.
Speaker C:First of all, obviously, you know, click and subscribe.
Speaker C:We would love to have you as a part of the Yippee Kai AI listening family.
Speaker C:Certainly, if you have enjoyed anything that you've heard today, sign up with Identity Jedi, check out what David Lee's doing, and we just appreciate the opportunity to hopefully share a nugget, a piece of information that will be valuable to you in your own journey, relative to understanding how we're going to secure AI as we operationalize it.
Speaker C:The last thing that I would leave: if there's anything that you've heard today, I hope it's this. Regardless of what AI can do, how it's going to change society, and what different opportunities it's going to create, there is no doubt that AI operationalization is going to require different thinking, different methods, and different security strategies, relative to keeping the digital world safe.
Speaker C:AI operates just differently enough for us to be concerned about what it may be able to do that we are currently not guarding and protecting today.
Speaker C:And what I really loved about what you said is that we need to figure out what we're protecting.
Speaker C:Like, I always like to remind folks, we have spent a good 50 years building security mechanisms and architectures to defend our companies and our organizations from everything on the outside.
Speaker C:We didn't build anything really well to protect us from stuff that's already inside.
Speaker C:Think about that AI service feature of subscription that you just signed up for that is now inside of your corporate estate.
Speaker C:That is now inside of your systems and your organization.
Speaker C:We're not really good at those types of security practices yet.
Speaker C:But I do have hope and I do have optimism.
Speaker C:David, anything that you'd like to share before we close out for the day?
Speaker A:I'll just say this.
Speaker A:Listen, there are still problematic things with AI, but it's still fascinating.
Speaker A:It's still fun.
Speaker A:It's still a technological advancement.
Speaker A:Don't be afraid.
Speaker A:Be educated.
Speaker A:Lean in.
Speaker A:Let's figure it out together.
Speaker D:All right.
Speaker C:Thanks again, David.
Speaker C:Thanks everyone for listening.
Speaker C:Look forward to catching you all in the next episode.
Speaker C:And happy trails to everybody out there.
Speaker C:Where the circuits hum, the code is written and the thoughts do run.