The Cyber Awareness Challenge 2025 really opened my eyes. I expected the usual refresher on phishing and strong passwords, but instead the course zeroed in on emerging threats and human factors I had underestimated. For example, it highlighted how insiders and careless habits are still a huge problem. One report found 83% of organizations saw at least one insider attack last year.
Equally startling, IBM notes that vishing scams exploded by 442% in 2024, and phishing volume has surged by thousands of percent since ChatGPT's debut. That’s why the training’s heavy focus on voice and multi-channel scams, deepfakes, and even stress and overconfidence caught me off guard – these aren’t topics I’ve seen on a test before. In short, the core surprise was learning just how advanced and varied today’s attacks are, and how traditional security awareness is scrambling to catch up.
Q/A for Cyber Awareness Challenge 2025
| Question | Correct Answer |
| --- | --- |
| How can you protect data on a mobile device? | By using two-factor authentication (so a stolen phone alone isn’t enough to access sensitive data). |
| How can you prevent viruses and malware? | By scanning all email attachments with approved antivirus software before opening them. |
| John receives an urgent email about a social media site shutdown. What should he NOT do? | He should not forward the email or its attachments to colleagues – that spreads potential malware. |
| Matt, a government employee, needs to share a confidential contract document with his boss. What should he do? | Encrypt the file and send it via his digitally signed DoD email (using approved crypto tools). |
| How should you handle a new “friend request” on a work social account? | Validate the request through a second channel (for example, call the person to confirm) before accepting. |
Underestimated Cyber Threats in 2025
Insider Threats and Negligent Behavior
Even seasoned IT folks often forget that the biggest risk can be “inside” the network. According to IBM’s report, a whopping 83% of organizations had at least one insider attack in the past year. Crucially, IBM emphasizes that not all insider threats are malicious – many stem from untrained or careless users who simply aren’t aware of the dangers. The 2025 Challenge section on insider threats drilled this point home: it showed scenarios where an employee’s simple mistake or unchecked privilege led to a breach. This was a wake-up call that neglect and complacency – like sharing passwords or ignoring policy – can be just as damaging as a malicious hack.
Vishing, Smishing, and Multi-Channel Scams
The training surprised me by elevating phone and text scams to equal billing with email phishing. Threat intelligence confirms why: cybersecurity researchers report that vishing (voice phishing) jumped over 442% in 2024, and nearly 40% of phishing campaigns now use channels beyond email (like SMS, social media, or collaboration tools). In other words, attackers are blending email, phone calls, texts, and chat apps to trick people. The Challenge’s scenarios – e.g. “Your boss calls urgently from an unknown number” – underscore that we need to watch out for scams on any channel. It reminded me that any communication could be faked if we let down our guard.
Physical Security and Removable Media Attacks
Despite all the focus on code and cloud, the Challenge reminded me not to ignore physical vectors. For instance, even in 2025 USB sticks remain a nightmare: a recent report found that 51% of malware attacks in industrial environments were delivered via USB, six times higher than five years ago. The Challenge showed cases like “someone found a flash drive in the parking lot” to highlight how easily malware can spread. It drove home that tailgating into secure areas, picking up unknown devices, or leaving hard drives unattended are still frontline threats. These are surprisingly common in real breaches, and the training’s emphasis was a good reminder.
Credential Theft & Password Exploits
Another area I had underestimated is credential-based attacks. We tend to think complex tech will save us, but in reality stolen passwords still fuel most breaches. One study found 24% of breach incidents started with stolen or abused credentials. And many people still reuse weak passwords – for example, 85% of users admit they reuse at least one password across sites. The Challenge included realistic quizzes like spotting fake login pages and advice on password managers. It underscored that even in 2025, threats like credential stuffing and password spraying are alive and well, especially now that many work accounts live in the cloud (the training noted that ~80% of phishing tries to steal cloud credentials). Seeing these stats in context was a bit of a slap in the face: we need to lock down accounts with MFA and good habits, always.

Supply Chain and Third-Party Exploits
Finally, the course spotlighted supply chain risk, which I’d tended to think of as a niche problem. But the numbers are eye-opening: between 2021 and 2023 supply-chain cyberattacks spiked over 431%, and in 2024 about 75% of companies reported experiencing an attack that originated in an unmonitored vendor system. The Challenge gave examples (like malicious open-source code or cloud providers) to show how a single weak link can compromise many organizations. That section surprised me by treating third-party security as everyone’s problem. It drove home that we all need to ask tough questions about our partners and keep an eye on the ecosystem, not just our own network.
Evolving Human Factors in Security Awareness
Remote and Hybrid Work Vulnerabilities
Working from home (or anywhere) is great – until it becomes a security blind spot. The training highlighted how distributed work has drastically widened the attack surface. For example, Stanford found that about 42% of the workforce now works offsite at least one day a week (up fivefold since 2019). That means employees log in from home routers, cafés, and personal devices. These setups often lack the corporate defenses of an office network, making phishing or malware far easier. The Challenge scenarios included things like an insecure home Wi-Fi or a personal laptop being used for sensitive tasks. This hit home for me – our team has fielded more password-recovery calls and VPN alerts lately – and it underscores that remote work must be paired with strong security policies and training tailored to home environments.
Alert Fatigue and Information Overload
The course also touched on something more subtle: people simply get overwhelmed. Between constant news of breaches, endless warning emails, and nonstop alerts, employees (and even IT teams) suffer alert fatigue. Studies show 63% of cyber teams spend over four hours a week dealing with false alarms, and a third of companies say they even delayed real incident response because they were chasing phantoms. On the user side, we are hit with so many “security updates” that it’s easy to just click through. The Challenge reminded me that less is more when it comes to warnings. We need clear, concise guidance – too many pop-ups or mock emails can make people tune out. In essence, employees need meaningful alerts and training, not a firehose of repetitive messages.
Overconfidence in Technology (Automation Bias)
A similar trap is assuming “the computer will catch it.” The Challenge scenario where an employee blindly trusts the spam filter was sobering: even the best tech doesn’t stop 100% of threats. Attackers count on this overconfidence. For example, some people might think “my antivirus or AI spam filter will protect me, so this weird email must be safe.” But IBM experts warn that AI-generated phishing can slip past those defenses because the language looks flawless. In short, the training stressed that no piece of software replaces good judgment. We should double-check even if the machine didn’t flag something, especially as “deepfake” phishing can appear more polished than ever.
Work-Life Blur and Personal Device Use
Another human factor is how personal life and work have blurred. Many of us use our phones for everything – yet those devices often house both sensitive company info and apps like social media or shopping. In fact, a recent survey found 90% of employees mix personal and work devices at some point. The Challenge illustrated what happens when a personal email or app gets compromised and then reaches corporate data. It warned about simple slips, like clicking a link on a tablet that also has work email. This reminded me that BYOD (bring-your-own-device) culture requires vigilance: even private profiles or public Wi‑Fi can introduce risks to work systems. The message was clear: treat personal tech as carefully as office tech, and use company-provided tools whenever possible.
Human Stress and Social Engineering
Finally, the training reminded me that life stress makes employees vulnerable. Under pressure or distraction, people are more likely to snap under a fake “emergency” email or phone call. The scenarios illustrated things like urgent boss requests or family emergencies, playing on fear and compassion. I realized that in high-stress periods (crazy deadline day, pandemic fatigue, etc.) I’m more susceptible to hurried decisions. The course emphasized that attackers exploit exactly this – they know that “I’m swamped” is the moment someone might click without thinking. In short, it’s a powerful reminder that cybersecurity isn’t just technical – it’s psychological. We need to build awareness that when stress is high, taking an extra second to verify is even more important.
Psychological Engineering: Sophisticated Social Tactics
Deepfake Impersonations
One of the most chilling parts of the Challenge was a case study on deepfakes. This year’s training showed how real-looking video or audio can be weaponized. For example, it mentioned an actual incident where attackers used AI-generated video of a company’s CFO to order a fraudulent $25 million wire transfer. Another recent example was when a cloud startup CEO’s voice was cloned to phish dozens of employees. These stories brought home that you cannot trust what you see or hear online. Training now urges us to double-verify urgent requests through a known channel (e.g. text the boss’s verified number) and be skeptical if something seems “too good” or out of the ordinary.
Business Email Compromise (BEC) 2.0 – The Long Con
The Challenge also expanded on the classic CEO fraud and fake invoice schemes. In fact, over 60% of businesses reported encountering BEC scams in 2024, often losing tens or hundreds of thousands of dollars. The 2025 scenarios illustrated how modern BEC isn’t just one quick email – attackers may be in contact with finance staff over days or weeks, gathering info. For instance, IBM noted that most phishing attacks (including BEC) use emotional or urgent language to push victims. The training advised caution even with familiar-seeming messages: question any request to move money or sensitive data, especially if it sounds rushed. In short, BEC is increasingly a patient, multi-step “con” that social engineers use, and the Challenge drove that point home in every module.
Exploiting Authority and Urgency
Following IBM’s guidance, the training listed common social-engineering tropes. We saw examples like “I’m your boss in an emergency – don’t ask questions” or “Your account is suspended – click this link to fix it”. These are designed to short-circuit our skepticism by invoking authority (the boss, IT, police) or fear of urgent consequences. The program’s advice was to always pause when someone claims you must act right now. As IBM analysts point out, legitimate workplace requests usually come calmly and politely – a real manager wouldn’t shout down the phone. The Challenge drills encouraged us to spot unusual tone or pressure, and to verify authority independently. I realized many of us, in our day-to-day, don’t stop to question an urgent email from a “supervisor,” but the training warns that’s exactly what attackers bank on.
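These tropes are consistent enough that they can even be flagged mechanically. Here is a toy heuristic (the phrase lists and scoring are my own illustrative assumptions, not part of the training or any real filter) that counts authority and urgency cues in a message; a high score is a signal to slow down and verify out of band, not a verdict.

```python
import re

# Illustrative phrase lists - real filters use far richer signals than this.
URGENCY_CUES = [r"\burgent\b", r"\bimmediately\b", r"\bright now\b",
                r"\baccount (is )?suspended\b", r"\bwithin 24 hours\b"]
AUTHORITY_CUES = [r"\byour (boss|manager|ceo)\b", r"\bit (department|support)\b",
                  r"\bdon'?t ask questions\b"]

def pressure_score(message: str) -> int:
    """Count distinct urgency/authority cues; higher means 'pause and verify'."""
    text = message.lower()
    return sum(bool(re.search(p, text)) for p in URGENCY_CUES + AUTHORITY_CUES)

msg = ("This is your CEO. Your account is suspended - "
       "act immediately, don't ask questions.")
print(pressure_score(msg))  # -> 4: two urgency cues plus two authority cues
```

The point isn’t the regexes – it’s that “authority plus urgency” is a recognizable pattern, which is exactly why the training tells us to treat that combination as a red flag.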
Blended Online-Offline Social Engineering
Lastly, the course showed that attacks can mix online and offline vectors. One scenario had an attacker look up a manager’s info on LinkedIn and then slip a phony USB drive into an employee’s car, later pretending to be IT on the phone. Another combined fake reviews with a live chat bot. These blended tactics highlight that criminals aren’t limited to email or phone – they’ll exploit any channel. The training’s key takeaway: if something odd happens in real life (like a stranger asking weird questions, or an unexplained file on your desk), it could be part of a digital scam. This was a nice reminder that social engineering is called that for a reason – it often spans social settings as much as cyberspace.
Overlooked Training Gaps in 2025
Despite all this, the Challenge also made it clear where training still falls short. The most striking stat I saw was from a study at UC San Diego: employees who had just completed their annual training did no better at spotting phishing in simulations than those who hadn’t trained in over a year. In other words, traditional annual courses aren’t enough. The 2025 Challenge implicitly acknowledges this by putting new content into practice (e.g. interactive quizzes on deepfakes and voice scams), but I noticed a few gaps:
- Checklist vs. Context: Many programs still use a slide-deck of general tips (e.g. “don’t click unknown links”) rather than realistic drills. The study above suggests this approach doesn’t stick. Effective training needs real phishing simulations and scenario-based exercises, not just bullet points.
- Emerging Threats: Some topics – like AI-generated spear-phishing or blended physical attacks – barely had any precedent in older training. I was surprised to see them included in the 2025 Challenge. Until now, most courses didn’t mention deepfakes, “next-generation” vishing, or social media reconnaissance. These new sections highlight that curricula have lagged behind the actual threat landscape.
- Repetition and Reinforcement: Security awareness used to be “one hour once a year.” Expert advice (and the Challenge itself) suggests that tips should be refreshed continually. The program’s tone hinted that we need ongoing mini-trainings or reminders about the latest scams. In practice, though, organizations often stop at that one annual session – a gap that 2025’s content itself implicitly tries to address.
In short, the Challenge underscored that simply having a required training module isn’t enough. It needs to evolve year by year, covering fresh tactics and really engaging people.
AI-Driven Social Engineering: The New Frontier
Attackers are increasingly leveraging AI to supercharge everything above. The training touched on this in several places, and outside sources show why it matters. For example, one analysis noted that an AI model can write a convincing phishing email in 5 minutes, whereas a human team might take 16 hours. This means phishing at scale becomes trivial: indeed, total phishing volume has already skyrocketed since the advent of ChatGPT.

Automated, Polished Phishing at Scale
With AI writing tools, attackers can craft personalized, well-worded emails quickly. During one training scenario I realized how fast this could happen: an AI could scrape a target’s social media profile to mention a friend’s name or company detail, then draft a “trusted” email on the fly. Reports indicate that over 80% of phishing campaigns now aim to steal credentials (often cloud logins), and an increasing number of those are AI-assisted. In practice, this means every user should be extra skeptical, because the language won’t show the usual typos or awkwardness. A recent IBM insight reminds us that “bad grammar” is no longer a reliable red flag.
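Since flawless language no longer gives the game away, mechanical checks matter more than prose style. One such check is spotting lookalike sender domains (typosquats). This is a toy sketch – the allowlist and the 0.8 similarity threshold are illustrative assumptions, and real mail gateways use much more robust techniques – but it shows the idea:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist; in practice this would come from your org's config.
TRUSTED_DOMAINS = {"example.com", "programminginsider.org"}

def lookalike_of(sender_domain: str, threshold: float = 0.8):
    """Return a trusted domain this one closely resembles (but isn't), if any."""
    d = sender_domain.lower()
    if d in TRUSTED_DOMAINS:
        return None  # exact match: legitimate, not a lookalike
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, d, trusted).ratio() >= threshold:
            return trusted  # close but not equal: likely a typosquat
    return None

print(lookalike_of("examp1e.com"))  # digit '1' for 'l' -> flags example.com
```

A human skimming “examp1e.com” in an otherwise fluent email will often miss the swap; a string comparison won’t, which is why layered tooling plus MFA matters when the prose itself looks perfect.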
Deepfake Videos and Real-Time Audio Cloning
AI doesn’t just write text — it generates media. During the Challenge, hearing examples of deepfake scams was downright eerie. We know of real incidents: over $25 million lost in a deepfake-video heist, employees fooled by cloned voice messages, etc. The training emphasized that if you get any unusual video call or audio request, it’s worth verifying. Future AI tools can even do this in real time (imagine someone live-translating and faking a CEO’s voice on a Zoom call). The takeaway: don’t trust an audio or video request at face value, no matter how convincing.
AI-Powered Personalization and Reconnaissance
Another layer is that AI can help attackers gather info and plan attacks ahead of time. One IBM security researcher pointed out that a quick web search (or AI agent) can reveal details like your company’s tech vendors or org chart from public sources. The Challenge’s advice to limit what personal info we share ties into this: the less an attacker knows about our background, the less tailored their scam can be. Moving forward, we’ll likely see adversaries using AI to automatically mine LinkedIn, press releases, and even social networks to craft highly relevant lures. Training is starting to address this by reminding us not to overshare online.
Malicious Chatbots and AI Assistants
We even saw a nod to fake chatbot attacks. For example, an AI chatbot could pose as a help-desk assistant and ask for login tokens. The 2025 scenarios briefly introduced the idea that attackers might deploy malicious chat interfaces. It’s early days, but the risk is there. Just like we tell users not to enter credentials into strange websites, we’ll need to teach caution around interactive AI agents in the office.
Defensive AI vs. Offensive AI – Staying Ahead
Finally, the Challenge hinted at the arms race between defender AI and attacker AI. On the defensive side, companies are increasingly using AI to detect unusual behavior or filter phishing. On the flip side, attackers use AI to evade those defenses. The material suggested that we stay adaptable – for instance, never assume a filter is infallible, and use multi-factor authentication so that even a stolen password generated by AI can’t be immediately misused. In conclusion, AI amplifies social engineering, but training and tools must adapt in kind.
FAQ for Cyber Awareness Challenge 2025
Q: What is the DoD Cyber Awareness Challenge, and who has to take it?
A: It’s a mandatory annual cybersecurity training required by DoD policy for all personnel (military, civilian, and contractors) with access to DoD IT systems. Its purpose is to refresh everyone on policies, threats, and best practices (as outlined in DoDI 8500.01).
Q: What’s new or different in the 2025 version of the Challenge?
A: The 2025 update added more content on modern threats: AI-driven scams, deepfakes, multi-channel attacks (smishing/vishing), and nuanced phishing scenarios. It also put extra emphasis on human factors (stress, remote work) and real-world case studies (like high-profile scams).
Q: If I’m confused during the quiz, can I ask someone or look it up?
A: Officially, you must complete the training and quiz on your own, but if you find a concept unclear (like what counts as “two-factor” or “insider threat”), you can review the provided study materials or ask a security officer. The point is understanding, not just passing.
Q: How often do I have to do this, and what if I fail?
A: By DoD rules, everyone must renew their Cyber Awareness Challenge every year. If you fail the quiz, you typically have to review the material and retake it until you pass. The key is to actually absorb the info, since the training is meant to keep you sharp on evolving threats.
Q: Where can I report a phishing attempt or suspicious activity?
A: The training reviewed reporting processes. Usually you should forward any suspicious email to your organization’s help desk or cybersecurity office. If it’s a DoD email system, use the designated reporting address (often something like report@yourorg.mil), or use the “Report Phishing” button if one is provided in Outlook. Always err on the side of caution.
Contact Us
Have questions about this post or cybersecurity in general? You can reach the Programming Insider security team at security@programminginsider.org or visit our Contact page. We’re happy to provide more resources or help clarify any of the topics above.
Conclusion
Taking the 2025 Cyber Awareness Challenge turned out to be a much more enlightening experience than I expected. It highlighted how the threat landscape keeps changing – with AI, deepfakes, and blended attacks – and that our defenses (and training) must change too. The biggest surprise was realizing that many “old” lessons (strong passwords, cautious clicking) still matter, but they must now be paired with new awareness about phone scams, insider behavior, and emotional manipulation. In my view, the key takeaway is that security awareness isn’t a static checklist; it’s an ongoing human skill. And as experts show (with hard statistics), neglecting any piece of it – whether by user error, outdated habits, or ignoring new tech – can be costly. By sharing these insights, I hope readers stay vigilant and keep learning. The enemy is innovating rapidly; our best response is a well-informed workforce backed by adaptive security practices.