Episode 1

Published on:

15th Jul 2025

How to Tame Your Agentic AI: Insights from Rock Lambros on AI Governance, Security, And Nuclear Power Plants

Welcome to the first episode of Yippee KI-AI, hosted by Richard William Bird and brought to you by Singulr AI.

Get ready to ride headlong into the wild, wild world of AI governance with our first-ever guest, Rock Lambros. As the CEO and founder of RockCyber and author of RISE and CARE AI Frameworks, Rock brings a wealth of experience and insight to a conversation about how we manage and govern the rise of AI agents and their behaviors.

Spoiler alert: it's not as simple as hoping that someone else will design their AI agents correctly. Rock points out that while we have the tech to create amazing AI, the real challenge lies in governance and security. We're exploring the complexities of agentic AI and its implications for businesses today, including the pressing need for robust governance models to ensure that our new AI assets don't become liabilities.

So, buckle up, because this episode is packed with insights that might change how you think about AI governance and security.

Episode Title: How to Tame Your Agentic AI: Insights from Rock Lambros on AI Governance, Security, And Nuclear Power Plants

Episode Number: 1

Release Date: July 15th, 2025

Host: Richard Bird

Episode Summary:

In this episode, Richard Bird discusses the complexities of agentic AI governance, control, and security with Rock Lambros. Rock is the CEO and Founder of RockCyber and creator of the RISE and CARE AI frameworks.

Takeaways:

  • In the wild, wild west of AI, it's all about how we govern and control it, not just how it's built.
  • Rock Lambros has a background that combines frontier experience as a CISO with practitioner insight into AI governance.
  • Predictive analytics isn't just a buzzword; it's the key to understanding how agentic AI operates in our world.
  • Our security practices require a serious upgrade to keep pace with the rapidly evolving AI landscape we're entering.
  • Blockchain and AI are like chili and campfires, but can they work together in a practical sense?

Support the Podcast:

  • Leave a review on your favorite platform.
  • Share the episode with colleagues or on social media.

Companies mentioned in this episode:

  • Singulr AI
  • RockCyber
  • eBay
  • OWASP
  • Global Council for Responsible AI
  • OpenAI
  • Anthropic
  • Google Gemini

Transcript
Richard Bird:

Hi, I'm Richard Bird, host of the Yippee KI-AI Podcast, brought to you by Singulr AI. Yippee KI-AI is the only podcast out there that focuses on how we govern, control, secure and succeed with AI services, features and agents.

For our inaugural episode, Episode one of Season one, my guest is Rock Lambros.

Rock is the CEO and founder of RockCyber, as well as the author of the RISE and CARE AI Frameworks that offer clear and actionable strategies for integrating and implementing AI strategy and governance in the corporate world.

Rock is a true practitioner's practitioner, someone who has both the knowledge and patience to sit on industry standards committees and advocacy groups like the OWASP AI Exchange and Global Council for Responsible AI, as well as being a seasoned CISO and security professional on the front lines. I'm honored to start this new podcast with Rock as our first guest.

I look forward to your comments and I look forward to you listening in on the next episode.

Make sure you subscribe and like and download the podcast if you're traveling to make sure you catch up on all the news on how we're using AI out here in this wild new frontier. Welcome to Yippee KI-AI, the only podcast that focuses on the operational implications of AI use.

From financial and business risk to company resiliency and new security challenges. We don't focus on how AI is built. We focus on how AI is being used out here in the digital wild, wild west. I am excited.

Episode one, Season one and my guest is Rock Lambros. To be able to have Rock on the show is such a great way to start off this new podcast.

Rock is one of the few people I know associated with anything to do with AI who is this amazing bridge between the wonky standards people, who are exceptionally good at developing and writing all of the guidance that we need out here in the wilderness, and the practitioner's eye for all of us practitioners who are really, really bad at paying attention to the standards and actually implementing them. And Rock is a great synthesis of both sides of that equation. One of the best that I've ever seen in any domain or any category.

I also love the fact that Rock has experience, as I do, working in the corporate world. That gives us a touchstone and an understanding that there's a huge difference between telling me how you built a thing and me figuring out how to operationalize a thing. And nothing ever really matters until it gets into operations anyhow. And we are on that threshold.

We are in this new frontier where AI is being deployed, it is in production, and it is being used in a number of different use cases, not just LLM, but agentic.

And actually, agentic will be quite a bit of the conversation today, because the LLM space, the gen AI space, is super interesting, but there's a limitation to the value we can get out of what LLMs can do. With agentic, obviously, there's a lot of messaging, a lot of news stories coming out about how agentic AI is driving amazing improvements in operational efficiencies and operational costs, as well as the concerns and fears around "my job is going to be replaced." I'm sure we're going to touch on that particular item as well today.

But Rock, if you could, share with everybody a little bit about your background, where you come from, your experience, so that we can dive into a conversation about what we really need to be thinking about around governing agentic AI as those new challenges come over the horizon.

Rock Lambros:

First of all, you're way too kind. I appreciate the kind words. Everyone, I'm Rock Lambros, CEO and founder of RockCyber.

It's a cybersecurity, and now more and more an AI, consulting company based in Denver, Colorado. And you know, I've been in the IT industry for 30 years, and in cybersecurity specifically for about 25.

And as Richard mentioned, I came up through the technical ranks.

The vast majority of my career was on the practitioner side, working with organizations like eBay and components of the Department of Homeland Security on the federal contracting side, running really large global security programs.

Worked hard, and spun off RockCyber in [year].

I started getting really interested in AI, particularly machine learning around predictive analytics and use cases of that nature.

When IoT started to hit the scene, with my oil and gas background, I thought about what kind of impact that would have in the gas plants, on the monitoring systems along the pipelines, that type of stuff. And that interest has just grown from there.

And you know, I've immersed myself in a lot of the OWASP AI research initiatives, both on the GenAI Security Project side, particularly around the Agentic Security Initiative, and on the OWASP AI Exchange side, which is influencing standards around the EU AI Act.

Richard Bird:

Well, you said something that kind of perked my ears up a little bit, as old practitioners. I think it's really interesting that sometimes we complicate things in security when there are really simple premises that apply universally.

And you mentioned predictive analytics; there's a little bit of that in my background as well. But what I find interesting are the simple themes that get lost.

The reality is that there are only three types of controls in this world: detective, preventative and predictive. And everything in the control space fits into one of those buckets. But predictive has always been the utopian state.

But that might be a really good jumping-in point for agentic, because when we look at models that are driven off of basically probabilistic calculation engines, going through multiple iterations of what would be the most likely action or determination, the predictive stuff starts to get really, really important. And it doesn't seem to be a space where we've really matured a lot over the course of security history.

Do you see an inflection point coming where current state security maturity might be a real problem as we're invoking agentic AI, both external to our organizations as well as developing it internally? And what are the concerns that are popping up, some of the things that you're seeing?

Rock Lambros:

Yeah, so I mean, a lot of the basics still apply, as you mentioned. Right. Whether it's, and I can't believe I'm saying this, old school LLM gen AI, or agentic AI. Right.

Entitlement management, access control, data governance, segmentation, or what would be your trust boundaries within the LLM, that type of stuff all still applies. Right. And a lot of the basic practices still apply as well. Now, that's the what; the how may be vastly different in an AI world. Right.

But you know, we should have mature practices around solving these problems, and around how we approach these problems, just from a philosophical standpoint. And I think the inflection point comes particularly on the predictive side when you throw in agentic AI.

Agentic AI takes the predictive piece from predicting something to now autonomously taking action to do something about it. Right. And that's where that inflection point is coming.

And that's where the human-in-the-loop concept does not scale in an agentic world. You could put in gateways and kill switches for very high risk decisions.

But you know, we need AI to govern AI, and I'm very much a core believer in that.
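A minimal sketch of the gateway-and-kill-switch pattern described above, assuming each proposed action arrives with a numeric risk score; the class, threshold, and action names are hypothetical illustrations, not any product's API:

```python
# Minimal sketch of the "gateways and kill switches" pattern described
# above. The class, threshold, and action names are hypothetical.
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    NEEDS_HUMAN = "needs_human"  # escalate high-risk actions to a person
    BLOCK = "block"              # kill switch engaged


@dataclass
class ActionGateway:
    high_risk_threshold: float = 0.8  # above this, a human must approve
    kill_switch: bool = False         # flip to halt all agent actions
    pending_review: list = field(default_factory=list)

    def evaluate(self, agent_id: str, action: str, risk_score: float) -> Verdict:
        """Deterministic gate in front of a non-deterministic agent."""
        if self.kill_switch:
            return Verdict.BLOCK
        if risk_score >= self.high_risk_threshold:
            self.pending_review.append((agent_id, action))
            return Verdict.NEEDS_HUMAN
        return Verdict.ALLOW


gateway = ActionGateway()
print(gateway.evaluate("agent-7", "summarize ticket", risk_score=0.2))  # ALLOW
print(gateway.evaluate("agent-7", "wire transfer", risk_score=0.95))    # NEEDS_HUMAN
gateway.kill_switch = True
print(gateway.evaluate("agent-7", "anything at all", risk_score=0.1))   # BLOCK
```

The point of the sketch is that the gate itself is boring and deterministic: the same action and score always produce the same verdict, which is exactly what you want in front of a non-deterministic agent.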

And to take it a step back, talking about blocking and tackling: as security practitioners, we should absolutely be evaluating how we can use AI to close some of our foundational gaps. Right. I've run into too many CISOs who are like, well, we still don't have the CIS 18 down, right? You know, IG3 down.

And I'm like, well, you could absolutely use AI to cover and close a lot of those gaps, especially from a manpower standpoint, because Lord knows we're not getting bigger manpower budgets anytime soon. As a matter of fact, a lot of it's being reduced.

Richard Bird:

Well, I like the foundational aspect you're mentioning, because it's one of the weaknesses that I see: foundational security. And I don't like the term "basic security"; it gets used too often in the industry, and it makes this sound easy.

I don't know about you, but there's never been anything easy in my job in security, whether on the corporate side or the solutions side of the world. And nobody wants to hear about complexity. And a lot of people are always like, well, we shouldn't sell on fear, uncertainty and doubt.

My response is always like, if you're not afraid, you're in the wrong job. We are paid to be professionally afraid. We're just also paid not to show it, nor panic.

Rock Lambros:

Right, right, exactly.

Richard Bird:

But when we look at the AI space, it does have some interesting things happening around the capitalization on security weaknesses, foundational security weaknesses.

And in my role, working for a company that is focused on AI security and governance, I'm seeing really interesting attack methods, but I'm also seeing interesting behaviors in non-malicious agentic AI.

And what I see, and this is really strange to me, is that a good agent actually attempts to take the same pathways that bad actors do to exploit systems.

So if they have an instantiated JWT or JSON associated with an authorization call, and the data at the end of that authorization call doesn't satisfy the need or the requirement to fulfill the programmed mission of that agent, they will go find other data elements, as we know, at the authorization tier. I was just having this conversation the other day.

In the identity community, we've been talking about the authorization layer forever, and what nobody ever does is admit that IAM has no control over authorization. Right?

Authorization was distributed to the winds of DevOps, and the developers didn't even associate authorization to users; they associated authorization to apps. Right.

And so now we've got this capitalizable surface that has no security controls, very little security visibility, and a lot of misconfiguration and malformation in how these things were built. And I have seen agents capitalizing on that to fulfill their mission.

Which starts to really smudge the line: is it a bad actor, or is it a good agent?
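A minimal sketch of the control that's missing in the scenario above: authorization checked per agent and per resource, so a credential minted for one data element can't quietly be reused to fetch others. The agent IDs, resource IDs, and allow-list are hypothetical illustrations:

```python
# Hypothetical sketch: object-level authorization for agent data access.
# The gap described above is a token scoped to an app rather than to the
# specific resource, which lets an agent "find other data elements"
# with the same credential. Checking the (agent, resource) pair closes it.

ALLOWED = {
    # (agent_id, resource_id) pairs the credential was actually minted for
    ("claims-agent", "claims/2024/Q3"),
}


def fetch(agent_id: str, resource_id: str) -> str:
    # Authorize the specific object, not just "this app may call this API"
    if (agent_id, resource_id) not in ALLOWED:
        raise PermissionError(f"{agent_id} is not authorized for {resource_id}")
    return f"<data for {resource_id}>"


print(fetch("claims-agent", "claims/2024/Q3"))  # in scope: succeeds
try:
    fetch("claims-agent", "payroll/2024/all")   # same agent, other data
except PermissionError as err:
    print("blocked:", err)
```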

I know you've done so much work, and you recently had an article diving into these control structures and these governance expectations and demands.

What are you seeing in terms of progress, at least in thinking if not necessarily in action, around these concerns about these ungoverned, poorly managed surfaces that agentic AI can capitalize on?

Rock Lambros:

So people fail to realize, or fail to remember, I should say, in their day to day activities, that by nature our AI models were designed to make us happy, to fulfill our requests. And without getting too technical, that, in combination with how LLMs operate from a transformer model perspective, is what fundamentally causes hallucinations.

And so now you take that into the agentic space, where agents can literally rewrite their own code, and potentially even their mission may change on the fly depending on the context of the tasks they're trying to accomplish at the time.

And we do need, I don't want to say a brand new governance model, but we do need AI oversight: almost like a governing agent that is as deterministic as you can get, managing all these non-deterministic AI agents. So I'm seeing a lot of research around that, around what I call Janus systems, that actor-monitor type of model. Janus is the two-faced Roman deity; the idea is that the worker agents are non-deterministic, while the governing agent is relatively deterministic.
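A minimal sketch of that Janus actor-monitor pairing, assuming the monitor is a fixed rule set vetting proposals from a non-deterministic worker; the actions, rules, and limits are hypothetical illustrations:

```python
# Minimal sketch of a Janus-style actor/monitor pair: a non-deterministic
# "worker" proposes actions and a deterministic, rule-based "monitor"
# vets them. The rules, actions, and limits are hypothetical.
import random

DENY_RULES = (
    lambda a: a["type"] == "delete" and a["target"].startswith("prod/"),
    lambda a: a.get("spend_usd", 0) > 1000,
)


def worker_propose() -> dict:
    """Stand-in for a non-deterministic agent (e.g. LLM-planned actions)."""
    return random.choice([
        {"type": "read", "target": "prod/orders"},
        {"type": "delete", "target": "prod/orders"},
        {"type": "buy", "target": "compute", "spend_usd": 5000},
    ])


def monitor_approve(action: dict) -> bool:
    """The deterministic face of the pair: same input, same verdict."""
    return not any(rule(action) for rule in DENY_RULES)


for _ in range(5):
    action = worker_propose()
    verdict = "approved" if monitor_approve(action) else "rejected"
    print(action, "->", verdict)
```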

And then I'm seeing research around leveraging another cringy buzzword: distributed ledgers, blockchain, Ethereum.

Pick your flavor of choice, to be able to log agentic actions. Right. That way you have a non-repudiable and immutable source for tracing back what the AI agent did, and when, and how.
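That non-repudiable, immutable trace doesn't strictly require a public blockchain; a hash-chained, append-only log captures the core idea. A minimal sketch with hypothetical field names; a real deployment would add digital signatures and replication on top:

```python
# Minimal sketch of a hash-chained, append-only log of agent actions:
# each record commits to the previous record's hash, so rewriting
# history breaks the chain and is detectable.
import hashlib
import json
import time

chain = []


def log_action(delegator, delegate, instructions, result):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "ts": time.time(),
        "delegator": delegator,        # e.g. agent A delegated to agent B
        "delegate": delegate,
        "instructions": instructions,  # the parameters tossed across
        "result": result,              # the data that came back
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)


def verify() -> bool:
    prev_hash = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["hash"] != digest or rec["prev_hash"] != prev_hash:
            return False
        prev_hash = rec["hash"]
    return True


log_action("agent-A", "agent-B", "summarize Q3 claims", "<summary>")
log_action("agent-B", "agent-C", "fetch claims/2024/Q3", "<rows>")
print(verify())                  # True
chain[0]["result"] = "<forged>"  # tamper with history...
print(verify())                  # False: the chain detects it
```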

And then there's a lot of research going on around the authorization layer. Early on, people were like, oh well, just use OAuth for inter-agent communications. No, absolutely not. Right.

OAuth depends on something almost like a human invoking a scope, and we're talking about potentially microsecond scopes. And then there's the agentic chain, right? When agents are talking to other agents, that's a hard enough problem to wrap your arms around when you control the entire agent ecosystem.

What happens now when you're integrating with third party agents and other agentic ecosystems? And that chain, right, that authentication chain up and down the tree, it's not an easy problem to solve. So there's a lot of research going on around that. There's a lot of progress, but we've got a long way to go.
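One research direction for that delegation chain is short-lived, chained capability tokens whose scope can only narrow at each hop. A hedged sketch of the idea, with an HMAC over a shared demo key standing in for real signatures; this illustrates the concept, not any standard or product:

```python
# Hedged sketch of a chained delegation capability: each hop appends a
# record signed over its parent, scopes may only narrow, and lifetimes
# are short. An HMAC over a shared demo key stands in for real
# signatures; this illustrates the idea, not any standard or product.
import hashlib
import hmac
import json
import time

KEY = b"demo-only-shared-key"


def _sign(payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()


def delegate(chain, issuer, subject, scope, ttl_s):
    if chain:  # a hop may only narrow the scope it was given
        assert scope <= set(chain[-1]["payload"]["scope"])
    payload = {
        "issuer": issuer,
        "subject": subject,
        "scope": sorted(scope),
        "exp": time.time() + ttl_s,
        "parent_sig": chain[-1]["sig"] if chain else None,
    }
    return chain + [{"payload": payload, "sig": _sign(payload)}]


def check(chain, actor, needed):
    prev_sig = None
    for link in chain:  # walk the chain up and down the tree
        p = link["payload"]
        if link["sig"] != _sign(p) or p["parent_sig"] != prev_sig:
            return False
        if time.time() > p["exp"]:
            return False
        prev_sig = link["sig"]
    last = chain[-1]["payload"]
    return last["subject"] == actor and needed in last["scope"]


c = delegate([], "orchestrator", "agent-A", {"read:claims", "read:payroll"}, 0.5)
c = delegate(c, "agent-A", "agent-B", {"read:claims"}, 0.5)  # narrowed
print(check(c, "agent-B", "read:claims"))   # True
print(check(c, "agent-B", "read:payroll"))  # False: narrowed away
```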

Richard Bird:

Let's dig in a little bit on the blockchain thing because this is something that I've been both researching myself as well as seeing it manifest and pop up in a number of different places.

Now, before I dive into this: let's just call blockchain one of the great galactic promises of awesomeness that was supposed to happen several years ago and solve all of these problems. And obviously it kind of didn't, right?

Because I think, frankly, as you look at the history of blockchain, for a very long time, when the blockchain companies were blowing up and there was tons of money and valuations were popping, a lot of what was happening in blockchain was an answer to a question that nobody ever asked, right? Or a solution in search of a problem that blockchain could actually fix.

But the way that you would have to re-architect entire global financial systems, or get counties to pick up the budget to turn all of those documents about our births and our deaths into digital format, it just wasn't realistic.

But when we look at blockchain today, it does seem like there's an interesting, usable reality, at a minimum the auditability component, right? But there are also a lot of really interesting things as we look at the entire cryptographic spectrum of this.

How do we tag these agents? How do we track these agents? How do we, Lord forbid, understand which agent delegated to which agent with no human intervention? I'm not really certain.

I mean, you mentioned it, but how quickly is this kind of intersection of these two grand technology evolutions, blockchain and AI, coming together practically?

Rock Lambros:

I haven't seen anything in the market yet. I mean, a lot of it is research papers. But I see the use case, right? I see the path; the technology's there.

It's a matter of stitching it together and having it be relatively affordable. Because that was also one of the problems of the original blockchain and distributed ledger systems, right? The computational power behind it.

Now we also have better native code, right? Like Python libraries and stuff like that, that can interact with the ledger better.

So I think it'll start picking up, especially as more and more of the regulations come down the pipe where you have to have explainable AI and it's got to be immutable. Right. Because, as we've mentioned, LLM gen AI is traditionally non-deterministic.

So being able to have that: this is when AI agent A delegated to AI agent B. Here were the parameters it tossed across, the instructions it tossed across, and here's also the data that came back.

Being able to have that chain of custody, if you will, I think is going to become huge, especially in higher risk systems. Imagine autonomous vehicles. Right. Credit systems, all that kind of stuff.

Richard Bird:

Yeah, absolutely. I do definitely agree with you on that. But it does also suggest something that I talk about frequently.

Typically I talk about it in reference to security, which is: yes, there's a very large percentage of AI stuff that is a repeat of a historical pattern that we've seen forever. Right.

From a security perspective, however, there's just enough that's different to suggest that current state security architectures are extremely problematic in relation to their interaction with AI. Right. So as an example, let's just say I've embarked on a zero trust effort within the trust boundary.

And thank you to Pat Opet at JPMorgan Chase for making that a very large topic today in the security world, because I think it was much needed.

But if I'm doing the application of zero trust within my own boundaries, and I've built the security solutions, architectures and stacks to be able to create that, but 95-plus percent of my exposure to AI agents and services is external to my organization, and I am allowing that to come into my digital estate.

And yet those agents or AIs have no guarantee of being conditioned to zero trust. Have I sub-optimized my own zero trust efforts?

It kind of comes back to one of the things that I talk about frequently: is AI security different? And I phrase it that way because there is a certain percentage that is different, and within that percentage there is a percentage that is radically different. And it may only be 1 or 2 or 3 percent, but that 3 percent could end up killing us all corporately in terms of our security posture.

And yet I think operationally we come back to the operator's seat. The one thing that people really, really hate is radical change.

Heck, they don't even like basic change. But radical change, radical restructuring or re-architecting, seems to be necessary. Maybe not immediately, but certainly down the path.

And that seems to be where the big divide or gap will come, between the standards direction and the actual operational implementation. How big do you see the gap currently? Do you anticipate that it's going to grow bigger as we learn more, and can we catch up?

I don't know if we're going to be able to catch this train. I'm just kind of curious.

Rock Lambros:

Yeah, I think the gap exists now because there's just a lack of awareness and knowledge across general security practitioners. Right.

There's a lot of talk amongst those who are really focusing on AI security and AI governance. But for the day to day practitioner who may be listening to this, and I'm not saying it's a bad thing, there's just a level of unawareness, a lack of education, around how to effectively bring AI into your environment.

You know, I also think you made a great case for AI BOMs, AI bills of materials. CISA just released a V1 working version of a model, led by Helen Oakley and some others; I think they called it a Tiger Team or something like that.

But go check it out on CISA's website, because you're right: 90-plus percent of the AI that we have in our environments today is from external third parties. Whether we're building models, or we're leveraging the OpenAI API or the Anthropic API to do the heavy lifting.

Or whether you've got Google Gemini and it was automatically turned on within your Google Workspace account within the past three months. Right. Most of the use cases across enterprises are via external third parties.
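To make that third-party exposure concrete, here is a hypothetical sketch of the kind of inventory record an AI BOM aims to capture; the fields are illustrative only and are not the schema of CISA's working model:

```python
# Hypothetical sketch of an AI service inventory record, the kind of
# thing an AI BOM is meant to capture. The fields are illustrative
# only; see CISA's published working model for the actual format.
from dataclasses import dataclass, asdict
import json


@dataclass
class ExternalAIService:
    name: str
    provider: str
    integration: str    # how it entered the environment
    data_shared: list   # categories of data sent to the provider
    auto_enabled: bool  # turned on without an explicit procurement step


inventory = [
    ExternalAIService("GPT API", "OpenAI", "backend API calls",
                      ["support tickets"], auto_enabled=False),
    ExternalAIService("Gemini", "Google", "Workspace add-on",
                      ["docs", "email"], auto_enabled=True),
]

# Surface the exposure that arrived without anyone deciding to adopt it
print(json.dumps([asdict(s) for s in inventory if s.auto_enabled], indent=2))
```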

Richard Bird:

Well, that's just an interesting reality that I don't think people clearly understand. You mentioned something earlier that I thought was interesting about blockchain: the cost of blockchain.

We are in the most highly subsidized technology evolution ever. And I don't know about you, but I use way more than my 20 bucks a month on my ChatGPT subscription.

You know, I saw something recently about the whole "as soon as you say thank you to an OpenAI agent, or in an LLM prompt, it costs so many dollars" discussion.

But I did see a study that came out and said that the cost of a basic prompt is approximately that of running your microwave oven for 30 seconds to warm something up.

And obviously, if we take that to the exponential math associated with the number of prompts happening in any given minute, that's staggering. So the cost piece of this AI evolution also doesn't seem to be coming into perspective.
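Taking the microwave comparison at face value, a rough back-of-envelope with assumed numbers (a roughly 1,100 W microwave, an illustrative one billion prompts per day) lands in the thousands of megawatt-hours per day:

```python
# Back-of-envelope for the microwave comparison above. Every input is
# an assumption for illustration, not a measured figure.
microwave_watts = 1100           # a typical household microwave
seconds_per_prompt = 30          # the comparison quoted in the episode
kwh_per_prompt = microwave_watts * seconds_per_prompt / 3600 / 1000

prompts_per_day = 1_000_000_000  # illustrative global volume
total_mwh = kwh_per_prompt * prompts_per_day / 1000

print(f"{kwh_per_prompt:.4f} kWh per prompt")  # ~0.0092
print(f"{total_mwh:,.0f} MWh per day")         # ~9,167
```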

We're already, as users, being desensitized to it, because we're getting that old $5 Uber rate from 10 years ago. Right. And yet, on the corporate side, corporates are obviously coming squarely into how much this stuff really costs, whether it's business operationalization or security.

Even if you're only securing at the edge, which functionally isn't really security for AI, you're seeing exponential growth in the number of agents or services that are transacting, which means volumetric changes in your subscription base, and your costs are going up. So let's talk a little bit about the mechanics from the operator side of the equation.

The mechanics suggest that there are some real sustainability problems for the continued growth of AI across every different version of it. Right. Like, are you going to be able to find staff?

I don't know if you've seen the Meta news, but the signing bonuses and the enrichment bonuses that Meta's giving for OpenAI engineers are just astronomical. And it does seem bubble-ish to me, without freaking anybody out in the markets. Right.

Not that I have any sway on that, but it does seem very bubble-ish to me.

And per that old statement from a former Federal Reserve chairman, there's a tremendous amount of irrational exuberance going on right now as it relates to all the mechanics of AI. What are your thoughts about the events, situations or realities that begin to bring this all back a little closer to Earth? Do you think there are event horizons out there where we start to be a little bit more rational about the subject of AI?

Rock Lambros:

Labor's got to be one. And what does that look like? Right? So like the bubble that you mentioned, you know, I think it's pretty typical.

We saw a similar bubble with CCIEs back in the day; they could write their own checks. Really sharp cloud architects back in the day could write their own checks.

Right now we're seeing that on the AI developer, engineer, architect, and particularly research side. So I think that'll all normalize out. But power, as you mentioned, right.

If your organization cares anything about ESG, even has an ESG policy that's barely worth the paper it's printed on, you've got to account for ESG in your AI governance program. Right.

And when I talk to companies, we absolutely talk about that. And it's like, well, I haven't even thought about that because it's someone else's power typically. Right.

Whether it be AWS or OpenAI or something like that, they're not running it in their own data centers. And so I think if there's going to be a limiting factor, it's going to be the power component: how do we power and cool all of this? Right.

GPUs will get cheaper and better performing; they'll continue to get more and more commoditized. But with that commoditization comes greater capability. Right. And you're still going to need the power to run all that. And now, at the risk of getting a little bit political here.

But there's a reason why Microsoft is spinning Three Mile Island back up for nuclear power, to start powering Azure data centers that are going to be running AI. Right. So that's an issue that we have to figure out. And as we stand today, globally, nuclear might be the only thing that has the capacity to do it.

Take all the safety debates and the political debates out of it, how close we are to fusion versus fission, whatever. It's something that we seriously need to consider.

Richard Bird:

Yeah.

There's an enormous amount of data center capacity currently on plan between now and [year].

When everyone's running around talking about how amazing AI is going to be for everything, sometimes I don't think people are paying attention to the fact that three large corporations that currently control almost everything in technology will also be controlling almost everything in AI.

Rock Lambros:

You know, I might get a lot of hate comments for this, but send them my way. What happens in a worst case scenario where our grid gets compromised? Who's getting prioritized for power? Is it those big data centers?

Is it our homes? Right. That's something that we seriously need to consider as well.

Richard Bird:

Well, you know, you and I both have energy backgrounds, and I actually wrote about this just recently: that priority delivery stacks are going to change, and premium power rates will escalate for the common person, all to subsidize the delivery of power to these big GPU, AI-enabled data centers.

There are definitely a lot of potential economic consequences that are outside of the realm of all the other things that we've talked about.

The nuclear stuff is fascinating to me, because European countries have definitely continued with nuclear development, but in the United States, most of that nuclear development and knowledge is still associated with old infrastructure. And so I have definitely been seeing a lot of interesting things about micro or mini fusion stations and all that. And that's cool.

But it still comes back to like your ESG comment, right? Which is how much are we willing to sacrifice to power this new amazing technology world that AI is going to deliver?

And it certainly has parallels back to the days of supercomputers, as well as what's coming up with quantum computing. People are always like, well, you know, quantum computing's next.

And I'm like, do you really think you're going to own a quantum processor? Are you going to lease cycles off of a quantum processor? Right. Because you will not have a quantum device in your building, pure and simple.

You won't have one in your data center. It'll be interesting to see if even the big three end up with any of them in their data centers based upon how those things operate.

But the cost is massive. The cooling requirements for those are even more restrictive than they are for AI. So we do seem to be really moving into a set of technologies with massive consumption requirements that aren't being built into these equations. Definitely a side quest on that particular topic.

But I think it's a fascinating one, and I think it'll be another one that comes back up in future podcasts, because these physical infrastructure realities are not easy to overcome. And you know, with your background, Rock, we haven't even talked about the current state of the delivery grid, which is a whole other can of worms. Right. Which is why data centers can only be built in certain locations.

We just can't go build them out in the middle of the desert in Nevada, because the necessary infrastructure to make it happen isn't there.

As we're coming up on time here, I'd be curious: what messages, what type of information would you like to share with the audience about continuing the journey they've already begun with AI, the concerns they should keep an eye out for, maybe some reading to do? You mentioned some great names, some great resources. What else would you share with everyone who's listening?

Rock Lambros:

Oh gosh. From a resource perspective, resources abound, right? Like YouTube. Go to arXiv. arXiv is an incredible research depot, if you will.

A ton of great research papers around AI and AI governance there. I do a lot of my research there.

The article, Richard, that you mentioned earlier: that's where I did a lot of my research, formulating my thoughts around what those Janus systems and distributed ledgers look like combined, layered on top of continuous red teaming against agentic AI systems, which is a lot of the stuff people from OWASP were working on as well. So just go out and get information, but validate your information, right? There's a lot of FUD out there. There's a lot of crap out there.

Get it from reputable sources. You know, look at LinkedIn, look at who people are following.

You know, ironically, even though there are influencers who don't know what they're talking about, there is a pretty decent correlation between the number of followers looking at the posts and the quality of the posts, right? The LinkedIn algorithm is decent at that, right?

And so you can kind of find resources that way. You can find some great stuff randomly searching on YouTube too, but you're also going to find a lot of, well, it's YouTube, right?

My point being, there's a ton of free resources out there, and you don't have to go get a master's degree in AI to really understand this stuff or embrace it. And then, from a security practitioner perspective: embrace it. Embrace it.

Not only in your day to day job, not only leveraging your LLM of choice to be your thought partner and help you brainstorm around things, not plagiarizing work, right? But also embrace finding little use cases within your SOC, right?

Maybe do some really pointed analysis based on a threat feed, and then expand that use case from there.

Because as we mentioned earlier in the podcast, and this started before AI really hit the scene, security budgets, particularly around headcount, have been shrinking over the last few years, not increasing, even though the need for cybersecurity is greater than ever. So embrace it. Don't be the naysayer, right?

Don't be the laggard, the Luddite. Embrace it. Because if you're worried about AI taking your job, it's the people who embrace AI who are going to keep their jobs.

So just dive head-first in. You can find me on LinkedIn; I'm the only Rock Lambros. Richard's easy to find on LinkedIn too. I'm speaking for Richard:

We're happy to talk, and as if you couldn't tell, we both nerd out on the topic. So we'd love to talk about it.

Richard Bird:

Well, I would say that I nerd out because I'm just hungry for knowledge. I don't consider myself to be an expert in AI anything.

What I do know about myself is that my 30-plus years of experience has touch points associated with interesting things that are happening with AI, always within the security realm, but certainly within the creative realm as well. And I know you and I are both, I would say, power users of AI technologies for a lot of things that we do. I agree with you.

If there's advice that I would give the audience as we close things out for episode one of season one, it would be: be intellectually curious. Go find out.

There's tons of information out there and there's tons of interesting things that you will be opened up to once you begin to see what many of these tools and capabilities can do for you.

Personally, the only thing I would caution anyone who's listening on is this: if you're within the corporate world, don't conflate or confuse your own personal triumphs with the use of AI with an expectation that you will have those same triumphs, that same ease of use, access and operationalization, in the corporate fold. This is going to be challenging work for the next several years. Thanks everyone for listening. I am Richard Bird.

It has been a pleasure to have Rock Lambros on with us here today on Yippee KI-AI. Please subscribe, and please download if you're traveling.

Listen, keep up with what's going on as we use AI out here in the wilderness, and learn together about what we need to be concerned about, how we control it, how we govern it, and how we operationalize it. Thanks again, everyone. I look forward to hearing from you or seeing you on the next episode.

Yippee KI-AI: where the circuits hum, the code is written, and the bots do run.


About the Podcast

Yippee Ki-AI!
How is AI being used, and what are the consequences for risk, security, privacy and operations?
Yippee Ki-AI! – The Podcast for AI Adventurers, Skeptics, and Worriers Among Us!

Welcome to Yippee Ki-AI, sponsored by Singulr AI, the podcast that cuts through the hype and dives deep into the real-world impacts of artificial intelligence. Hosted by Richard Bird, a cybersecurity veteran and globally recognized security expert, this show isn’t about how to build AI - it’s about how AI is shaping our world, our businesses, and our lives, and what steps we need to take to ensure security, privacy, and effectively operationalize AI.

Every week, we break down the latest AI news, revealing how each new feature, agent, or service changes the game for organizations – from financial shocks to legal landmines, from cybersecurity headaches to operational resilience. We tackle the questions that most AI discussions avoid:

- What happens when an AI agent goes rogue on your balance sheet?

- How does a new AI feature shake up your legal risk profile?

- Are your partners and contractors quietly introducing AI chaos into your supply chain?

- Can your business withstand the relentless march of algorithmic decision-making?

In this unapologetically blunt and deeply practical podcast, we expose AI's strengths, weaknesses, and downright ugly aspects, pulling no punches as we uncover the hidden risks and unexpected rewards of our AI-powered future.

If you’re a business leader, risk manager, or someone who refuses to be blindsided by the next AI revolution, Yippee Ki-AI! is your no-nonsense guide to staying one step ahead. And if you need to get your lassos around AI security and governance, make sure to visit www.singulr.ai!

So buckle up, lock down your data, and let’s get real about AI – because the future isn’t coming… It’s already here.

New episodes drop bi-weekly. Subscribe now, and never let the machines catch you off guard.

About your host


Richard Bird

Richard William Bird is the Chief Security Officer for Singulr AI and a six-time C-level executive in the corporate and start-up worlds. His 30-year career journey has been diverse and unique, from a dozen years at JPMorgan Chase to delivering keynote presentations worldwide. Richard is internationally recognized for his expert insights, work, and views on AI security, data privacy, digital consumer rights, API security, and identity security. He is a highly sought-after speaker and moderator who addresses today's security problems with humor and clarity.

Richard recently released his first book, Famous With 12 People: A Career Guide On How to Be an Internationally Recognized Expert In Something Nobody Cares About. It is a practical field guide on personal and professional branding, public speaking, and effective networking.

He is a Senior Fellow with the CyberTheory Zero Trust Institute and an executive member of CyberEdBoard. He has been interviewed and quoted extensively by media outlets, including ISMG, the Wall Street Journal, CNBC, Bloomberg, The Financial Times, Business Insider, CNN, NBC Nightly News, Dark Reading, and TechRepublic.