The Dark Side of AI

Lots of people are scared of AI — and understandably so. AI is being blamed for rising RAM and computer prices, and it looms large behind the unusually high number of tech layoffs we’re seeing right now. While some jobs are genuinely at risk, many remain out of AI’s reach — doctors, lawyers, tradespeople, construction workers, and food service workers among them. But as much as people worry about job displacement, there’s another danger I haven’t seen nearly enough conversation about.

In my day job providing technical support for a software company, I hear constantly from real-world end users. One of the biggest issues they face is online scams — ransomware attacks, phishing emails, and identity theft. What I haven’t seen discussed enough is how AI is actively being weaponized to make these scams more convincing and more dangerous. So today, I want to talk about that. How can we protect the older generation? How do we keep them informed — and prevent more people from falling victim?

Last October, I led a cybersecurity course for older members of a local church I attend. We covered the dangers of AI and practical tips for staying safe online. Nearly six months later, I still hear from attendees about how it changed their habits — how they now check the sender on important emails, look for red flags, and delete suspicious messages without a second thought.
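Those habits, checking the sender and scanning for red flags, boil down to a short mental checklist. As a rough sketch of that checklist in code (the trusted-domain list and keyword sets here are made-up examples, not a real spam filter), it might look like this:

```python
# Hypothetical examples of the red flags covered in the class.
TRUSTED_DOMAINS = {"paypal.com", "amazon.com", "irs.gov"}  # your own allowlist
URGENCY_WORDS = {"urgent", "immediately", "act now", "verify your account"}
RISKY_REQUESTS = {"gift card", "wire transfer", "bitcoin"}

def check_email(sender: str, subject: str, body: str) -> list[str]:
    """Return human-readable warnings for a suspicious email.

    A teaching sketch, not a real filter: the rules mirror the habits
    taught in the class (check the sender, watch for manufactured
    urgency, be wary of untraceable payment requests).
    """
    warnings = []
    text = f"{subject} {body}".lower()

    # 1. Check the sender: is the domain one you actually know?
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        warnings.append(f"Sender domain '{domain}' is not on your trusted list")

    # 2. Scammers manufacture urgency to stop you from thinking.
    if any(word in text for word in URGENCY_WORDS):
        warnings.append("Uses urgent, pressuring language")

    # 3. Gift cards and wire transfers are untraceable by design.
    if any(req in text for req in RISKY_REQUESTS):
        warnings.append("Asks for gift cards or other untraceable payment")

    return warnings
```

No code required in practice, of course; the point is that each rule is something a person can check in a few seconds before clicking anything.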

Cybersecurity doesn’t have to be complicated. Once you know what to look for, it becomes second nature. The problem is that many people simply aren’t aware of what AI is capable of. Spend an afternoon scrolling through Facebook and you’ll see what I mean — people regularly engage with AI-generated images as if they’re real. On the surface, it seems harmless: someone shares an image of Jesus rescuing a baby from a car fire and people treat it as fact. But that same person could just as easily fall for an AI-generated voice clip impersonating a loved one, asking them to send gift cards to a “long-lost family member.” That’s where it gets truly dangerous.

I’m not anti-AI. I use it constantly — it’s a fantastic tool that has genuinely made my work easier, and I believe it has enormous potential for good. But that potential cuts both ways, and older adults in particular need to be aware of how AI can be used against them.

The number one takeaway from this post: please talk to your elderly family members. I take calls from people struggling with technology every day, and it's heartbreaking. Some trust everyone they speak with on the phone; others have swung to the opposite extreme, becoming suspicious of anyone with a non-American accent, which creates a troubling problem of its own.

As the cost of living rises, more people are turning to scams just to get by. It's a troubling shift in mindset, from "how can I contribute?" to "how can I take?" It's easier to spin up a scam website and blast out phishing emails than to build something of value. After my cybersecurity class, several attendees reached out asking for help with their devices and security questions. I always say yes, not for the money, but because I genuinely love helping people.

It infuriates me to see AI used as a tool to exploit vulnerable people. We have access to one of the most powerful technologies ever created, largely for free — let’s use it to build things, not steal from people. Start a business. Build an app. Write a blog. Do something that adds to the world instead of taking from it.

If you have any questions or are interested in IT services, feel free to reach out at contact@starman.tech.
