Attackers don't follow a straight line. They zigzag, backtrack, pivot, and adapt. This talk explores how the MITRE ATT&CK framework maps these crooked paths — and what developers can do to straighten out their defenses. The background illustrates the contrast: the dashed line is the path defenders expect, and the red crooked line is how attacks actually unfold across tactics like reconnaissance, lateral movement, and exfiltration.
Quick intro — I'm Chris, a Principal Software Engineer at Microsoft. I spend a lot of time thinking about how developers can build more secure applications without needing a PhD in cybersecurity.
Here's our roadmap. We'll start with the problem, compare two major frameworks, then dive deep into real attack techniques with code examples. By the end, you'll have practical patterns you can use tomorrow.
The attack surface has exploded. We're not just building monoliths anymore — we have APIs, microservices, serverless, and cloud infrastructure. And attackers don't just try one thing. They chain techniques together in complex kill chains. Our defenses need to evolve beyond "patch and pray."
Most developers know OWASP. It's fantastic for understanding vulnerabilities — what can go wrong in your code. But it's fundamentally a prevention-focused, vulnerability-centric view. It answers "what's broken" but doesn't tell you much about who's attacking or how they actually behave.
MITRE ATT&CK flips the perspective. Instead of cataloging vulnerabilities, it catalogs attacker behavior. It started at Fort Meade when MITRE researchers studied real adversaries on a network. With over 200 techniques mapped from real-world attacks, it's the most comprehensive map of how hackers actually operate.
ATT&CK isn't the only framework from MITRE. D3FEND maps defensive countermeasures, ATLAS covers AI/ML threats, and ENGAGE provides adversary engagement strategies. Together they form a comprehensive ecosystem. But ATT&CK is the foundation — and the most relevant for developers.
Think of it as a hierarchy. Tactics are the goals — "I want to get initial access." Techniques are how — "I'll use spear phishing." Sub-techniques get specific — "I'll send a phishing email with a malicious attachment." And procedures are documented cases where real threat groups actually did this.
This is the kill chain — the crooked line from our title slide. Notice how it's grouped: Pre-Attack for reconnaissance, Get In for initial compromise, Stay In for maintaining access, and Act for achieving objectives. Attackers don't always go linearly — they loop back, skip steps, and adapt.
Side by side, you can see the difference. OWASP says "your SQL query is injectable." ATT&CK says "an attacker will exploit your public-facing application, escalate privileges, move laterally, and exfiltrate data." Both views are essential — one prevents the hole, the other detects the intruder.
The key insight: real-world breaches are never a single vulnerability. They're chains of techniques. SolarWinds was supply chain compromise leading to lateral movement leading to data exfiltration. You need prevention AND detection to handle the full lifecycle.
This mapping is incredibly useful. When you fix an OWASP vulnerability, you're actually blocking specific ATT&CK techniques. Fixing SQL injection doesn't just close a bug — it blocks T1190, which is the front door for dozens of attack chains. Understanding this connection helps you prioritize what to fix first.
Now we shift gears. For the next section, I want you to put on a black hoodie — metaphorically. We're going to look at real code through the eyes of an attacker and then see how to defend it.
This is the core insight of the talk. Defenders build straight-line defenses — firewall, IDS, patch management. But attackers zigzag, loop back, escalate, discover new targets, and escalate again. ATT&CK captures this messy reality that a linear kill chain model misses.
SolarWinds is the poster child for why developers need ATT&CK. The attackers compromised the build system — not the source code — so code reviews missed it entirely. The malicious DLL was signed with SolarWinds' own certificate. 18,000 organizations installed it. It was undetected for 9 months. This is what a real crooked line looks like: supply chain to execution to credential theft to exfiltration, with defense evasion at every step.
This is where every attack begins — getting that first foothold. Whether it's exploiting a web vulnerability, stealing credentials, or phishing, the attacker needs a way in.
These are the most common initial access techniques. T1190 is your classic web app exploit — SQL injection, XSS, etc. T1078 is even scarier — the attacker has real, valid credentials. Brute force and phishing are how they get those credentials in the first place.
Classic SQL injection. The attacker passes "1 OR 1=1--" as the ID, which dumps the entire users table. This is the number one way attackers exploit public-facing applications. Simple string concatenation is all it takes to open the door.
The fix is straightforward — parameterized queries. But notice we also added input validation and consistent error responses. The consistent 404 prevents T1087 account discovery — attackers can't tell which user IDs exist based on different error messages. Defense in depth.
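To make the pattern concrete, here's a minimal Python/sqlite3 sketch of the same defense (the `get_user` helper and the table layout are assumptions for illustration; the slide's language may differ):

```python
import sqlite3

def get_user(conn: sqlite3.Connection, user_id: str):
    """Fetch one user safely; returns None for any miss or bad input."""
    # Input validation: reject anything that isn't a plain integer ID.
    if not user_id.isdigit():
        return None  # same response as "not found": no information leak
    # Parameterized query: the driver sends the value separately from the
    # SQL text, so "1 OR 1=1--" can never change the shape of the query.
    return conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (int(user_id),)
    ).fetchone()  # None for unknown IDs -> caller returns a uniform 404

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(get_user(conn, "1"))           # (1, 'alice')
print(get_user(conn, "1 OR 1=1--"))  # None: injection neutralized
```

Because unknown IDs and malformed input produce the identical `None`, the caller can return the same 404 for both, which is the account-discovery defense mentioned above.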
Credential stuffing uses breached password databases to try known username/password pairs at scale. Detection is key here — look for many accounts being tried from the same IP, or unusually rapid login attempts. This is behavioral detection, not vulnerability prevention.
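A rough sketch of that behavioral signal in Python (the window size and threshold are assumed values for illustration):

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_ACCOUNTS_PER_IP = 5  # assumed threshold; tune to your traffic

class StuffingDetector:
    """Flag an IP that tries many *distinct* accounts in a short window.

    Normal users retry one account; credential stuffing sprays many.
    """
    def __init__(self):
        self.attempts = defaultdict(deque)  # ip -> deque of (ts, username)

    def record(self, ip: str, username: str, ts: float) -> bool:
        q = self.attempts[ip]
        q.append((ts, username))
        # Slide the window: drop attempts older than WINDOW_SECONDS.
        while q and ts - q[0][0] > WINDOW_SECONDS:
            q.popleft()
        distinct = {u for _, u in q}
        return len(distinct) > MAX_ACCOUNTS_PER_IP  # True = suspicious

det = StuffingDetector()
flagged = False
for i in range(10):
    flagged = det.record("203.0.113.7", f"user{i}", ts=float(i))
print(flagged)  # True: one IP tried 10 different accounts in 10 seconds
```

In production this check would feed a step-up challenge or a temporary IP block rather than a hard failure.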
Once attackers get in, they need to execute code. This section covers how they run malicious commands through your application.
Command injection is when user input ends up in an OS command. Exploitation for client execution targets the user's browser or client application. Process injection is more advanced — injecting code into running processes. As developers, we mostly encounter T1059.
This is terrifying. The filename goes directly into a shell command. An attacker sends "file.jpg; rm -rf /" and suddenly your server is wiping itself. Or worse — they install a reverse shell and maintain persistent access. Never concatenate user input into shell commands.
The defended version never uses a shell. We validate the filename, use an allowlist of commands, escape arguments, and call the binary directly with ProcessStartInfo. No shell means no shell injection. Always avoid UseShellExecute when processing user input.
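The same shell-free idea in a Python sketch (the command allowlist, tool path, and filename policy are assumptions; the slide shows the C# equivalent with ProcessStartInfo):

```python
import re
import subprocess

ALLOWED_COMMANDS = {"thumbnail": ["/usr/bin/convert"]}  # assumed allowlist
SAFE_NAME = re.compile(r"[A-Za-z0-9._-]+")

def process_upload(filename: str) -> list[str]:
    """Build a safe argv for an image tool; raises on suspect input."""
    if not SAFE_NAME.fullmatch(filename):
        raise ValueError("invalid filename")
    # argv list + no shell: the filename is always a single argument and
    # is never parsed by /bin/sh, so "; rm -rf /" is just a bad filename.
    argv = ALLOWED_COMMANDS["thumbnail"] + [filename, "thumb_" + filename]
    # subprocess.run(argv, shell=False, check=True)  # how you'd invoke it
    return argv

print(process_upload("cat.jpg"))
try:
    process_upload("file.jpg; rm -rf /")
except ValueError as err:
    print("rejected:", err)
```

The validation regex would have rejected the payload anyway; passing an argv list with `shell=False` is the layer that holds even if validation has a gap.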
Pickle deserialization is essentially arbitrary code execution disguised as data parsing. The fix is simple — use JSON instead. If you must deserialize complex objects, use schema validation. Never deserialize untrusted data with pickle, YAML's unsafe loader, or Java's ObjectInputStream.
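A minimal sketch of the JSON-plus-schema approach (the job fields are an assumed example; real services would use a proper schema library):

```python
import json

# Assumed schema for illustration: a job record with two required fields.
SCHEMA = {"task": str, "retries": int}

def load_job(payload: bytes) -> dict:
    """Parse untrusted bytes as JSON and validate a tiny schema.

    json.loads can only ever produce plain data structures, so unlike
    pickle there is no path from parsing to code execution.
    """
    obj = json.loads(payload)
    if not isinstance(obj, dict):
        raise ValueError("expected an object")
    for field, ftype in SCHEMA.items():
        if not isinstance(obj.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    return obj

print(load_job(b'{"task": "resize", "retries": 3}'))
try:
    load_job(b'{"task": "resize"}')
except ValueError as err:
    print("rejected:", err)
```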
Attackers don't want to re-exploit every time. Once they're in, they want to stay in. This is where persistence techniques come in — and session hijacking is one of the most common web-specific methods.
Account manipulation means creating backdoor accounts or elevating privileges on existing ones. Session hijacking steals active sessions — why crack passwords when you can steal the cookie? Web shells are the scariest — a persistent backdoor file on your server that gives the attacker a command line.
Count the vulnerabilities: hardcoded secret means anyone with source access can forge sessions, no HTTPS means cookies fly in plaintext, no httpOnly means JavaScript can steal them via XSS, and 24-hour expiration gives attackers a huge window. Plus, no session validation or rotation means a stolen session works forever.
The defended version addresses every issue: environment-based secrets, HTTPS-only cookies, httpOnly flag, short 15-minute expiry with rolling refresh, SameSite protection, and session fingerprinting. The fingerprint ties the session to the client's characteristics — if someone steals the cookie but has a different fingerprint, we kill the session.
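The token-side mechanics can be sketched in standard-library Python (the signing scheme, fingerprint inputs, and env-var name are assumptions for illustration; in a real app the token also rides in a cookie marked Secure, HttpOnly, and SameSite):

```python
import base64
import hashlib
import hmac
import json
import os

SECRET = os.environ.get("SESSION_SECRET", "dev-only-secret").encode()
MAX_AGE = 15 * 60  # 15-minute expiry, refreshed on activity

def fingerprint(user_agent: str, ip_prefix: str) -> str:
    """Tie a session to coarse client characteristics (assumed choice)."""
    return hashlib.sha256(f"{user_agent}|{ip_prefix}".encode()).hexdigest()

def issue(user: str, fp: str, now: float) -> str:
    body = json.dumps({"u": user, "fp": fp, "iat": now}).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def validate(token: str, fp: str, now: float):
    b64, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(b64)
    if not hmac.compare_digest(
        sig, hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    ):
        return None  # forged or tampered token
    data = json.loads(body)
    if now - data["iat"] > MAX_AGE:
        return None  # expired: short window limits a stolen cookie
    if data["fp"] != fp:
        return None  # replayed from a different client: kill the session
    return data["u"]

fp = fingerprint("Mozilla/5.0", "203.0.113")
tok = issue("alice", fp, now=1000.0)
print(validate(tok, fp, now=1200.0))  # alice
print(validate(tok, fingerprint("curl/8.0", "198.51.100"), now=1200.0))  # None
```

Note the constant-time `hmac.compare_digest`; comparing signatures with `==` leaks timing information.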
Web shells are how attackers maintain persistent access to your server. This validator checks both file extensions and content patterns. A file named "profile.jpg" that contains "<?php eval(" is clearly a web shell. Always validate upload content, not just the extension — attackers can double-extend filenames or use polyglot files.
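A simplified Python version of that two-layer check (the signature list is an assumed, deliberately tiny sample; real scanners use far richer rules):

```python
ALLOWED_EXT = {".jpg", ".png", ".gif"}
# Assumed signatures of common web-shell payloads, for illustration only.
SHELL_PATTERNS = [b"<?php", b"eval(", b"base64_decode(", b"passthru("]

def is_safe_upload(filename: str, content: bytes) -> bool:
    """Reject by extension AND by content; either alone is spoofable."""
    # Walk every extension so "shell.php.jpg" fails on ".php".
    parts = filename.lower().split(".")[1:]
    if not parts or any("." + p not in ALLOWED_EXT for p in parts):
        return False
    # Content scan: a "jpg" containing PHP tags is a web shell.
    lowered = content.lower().replace(b" ", b"")
    return not any(pat in lowered for pat in SHELL_PATTERNS)

print(is_safe_upload("profile.jpg", b"\xff\xd8\xff real image bytes"))    # True
print(is_safe_upload("profile.jpg", b"GIF89a <?php eval($_GET['c']);")) 	# False
print(is_safe_upload("shell.php.jpg", b"harmless"))                        # False
```

Storing uploads outside the web root and serving them with a forced download content type adds another layer on top of this check.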
Credentials are the keys to the kingdom. Attackers know that developers often leave secrets lying around in code, config files, and environment variables. Let's look at the wrong way and the right way.
T1552 is huge — hardcoded credentials in source code are found in almost every codebase audit. T1555 targets credential stores like browser password managers. T1528 is about stealing OAuth tokens and API keys from running applications. All three are preventable with proper secrets management.
I see this in code reviews all the time. Passwords in connection strings, API keys in config objects, secrets committed to git. Once a secret hits version control, it's there forever — even if you delete it, it's in the git history. Tools like truffleHog and GitLeaks specifically scan for these patterns.
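The scanning idea is simple enough to sketch; these regexes are assumed, cut-down examples in the spirit of truffleHog/GitLeaks rules, not their actual rule sets:

```python
import re

# Assumed patterns for illustration; real scanners ship hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"""(?i)(api[_-]?key|secret)["']?\s*[:=]\s*["'][^"']{16,}["']"""
    ),
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def scan_source(text: str) -> list[str]:
    """Return the names of secret patterns found in a source snippet."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

clean = 'conn = connect(host=os.environ["DB_HOST"])'
dirty = 'API_KEY = "sk_live_abcdefghijklmnop1234"'
print(scan_source(clean))  # []
print(scan_source(dirty))  # ['generic_api_key']
```

Wired into a pre-commit hook, a hit here fails the commit before the secret ever reaches git history.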
The right approach: use Managed Identity — your app authenticates to Azure without any credentials in code. RBAC controls who can access what in Key Vault. No API keys, no secrets in config, no rotation headaches. DefaultAzureCredential works locally with your dev credentials and in production with managed identity.
Same pattern in JavaScript with Azure Key Vault and Managed Identity. The key principle across all languages: no secrets in code, no API keys, authenticate with identity not credentials. Works with Azure, AWS IAM Roles, and GCP Workload Identity too.
Don't build your own scanner — use GitHub's built-in secret scanning. It covers 200+ partner patterns and blocks pushes before secrets ever hit version control. For private repos, GitHub Advanced Security adds custom patterns and organization-wide coverage. Combined with pre-commit hooks, you get defense in depth for credential leaks.
This is the sneaky stuff. Once attackers are in, they don't want to be detected. They'll tamper with logs, obfuscate their tools, and masquerade as legitimate processes. If your logging can be manipulated, your incident response is blind.
T1027 is about hiding malicious payloads — encoding, encryption, packing. T1070 is log tampering — deleting or modifying logs to cover tracks. T1036 is masquerading — making malicious files look like legitimate system files. These techniques make forensic investigation extremely difficult.
Log injection is subtle and devastating. The attacker's username contains a newline and a fake log entry. Your log file now shows a successful admin login that never happened — and the real failed attempt is buried. During incident response, investigators will see "Successful login for admin" and miss the attack entirely.
The real solution is: don't own the log storage. Ship structured logs via OpenTelemetry to Azure Monitor Log Analytics or Application Insights. The logs land in an immutable workspace you query with KQL — attackers can't tamper with what they can't reach. Sanitize inputs before logging to prevent injection, and use structured logging so fields aren't interpolated into raw strings.
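The sanitize-then-structure step looks roughly like this in Python (field names are assumed; the same idea applies in any structured logging library):

```python
import json

def sanitize(value: str) -> str:
    """Neutralize the characters an attacker uses to forge log lines."""
    return value.replace("\r", "\\r").replace("\n", "\\n")

def log_event(event: str, **fields) -> str:
    """Structured logging: fields are JSON-encoded, never interpolated
    into a raw line, so a newline inside a username cannot start a
    second, forged log entry."""
    record = {"event": event}
    record.update({k: sanitize(str(v)) for k, v in fields.items()})
    return json.dumps(record)  # ship this line to your SIEM / Log Analytics

malicious = "bob\n2024-01-01 INFO Successful login for admin"
line = log_event("login_failed", user=malicious, ip="203.0.113.7")
print(line)
print("\n" in line)  # False: the forged entry stays inside one JSON string
```

JSON encoding alone would escape the newline; sanitizing first is belt-and-braces so the forged text is visibly defanged even after decoding.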
This architecture ensures that even if an attacker gets root access, they can't silently erase their tracks. The local buffer, encrypted storage, and external SIEM create multiple independent records. Tamper detection compares them — if they disagree, someone modified the logs.
This is the technique that keeps security teams up at night. Why attack your code when they can attack the code you depend on? SolarWinds, Log4Shell, and the event-stream incident all showed how devastating supply chain attacks can be.
T1195 is the broad category — any compromise of something upstream of you. T1195.001 specifically targets software dependencies — the npm packages, PyPI packages, and NuGet packages we all depend on. The average application has hundreds of dependencies, each one a potential attack vector.
The event-stream incident is a masterclass — a maintainer handed off a popular package to a new contributor who added a Bitcoin-stealing payload. Shai-Hulud is even scarier: a self-replicating npm worm that propagates by hijacking publish tokens and injecting postinstall scripts into every package the compromised developer maintains. It spreads automatically — no social engineering needed after initial infection. Typosquatting creates packages with similar names hoping for typos. Dependency confusion exploits the gap between public and private registries.
This is the most sophisticated supply chain attack in open source history. An attacker — widely believed to be state-sponsored — spent TWO YEARS building trust, contributing legitimate patches, then socially engineering their way to co-maintainer. The backdoor was only in the release tarballs, not the git repo — bypassing all code review. It was caught by sheer luck when Andres Freund noticed SSH performance degradation while debugging something unrelated.
Every ecosystem has tools for this. npm audit, pip-audit, and dotnet list package --vulnerable are your first line of defense. Use lockfiles religiously — npm ci installs exactly what the lockfile specifies, and pip's --require-hashes verifies every package against known checksums. Run these in CI/CD to catch issues before deployment.
Lock files are your first defense — they pin exact versions with cryptographic hashes so no one can swap a package without detection. SBOMs go further: they're a complete inventory of every component in your software. When a new CVE drops, you can instantly answer "are we affected?" by scanning your SBOM instead of auditing source. Tools like Syft, CycloneDX, and Grype make this practical. Run SBOM generation in CI and scan on every build.
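The hash-pinning check these tools perform is conceptually tiny; here's a sketch (the artifact name and pinned bytes are invented placeholders, not real package data):

```python
import hashlib

# Assumed lockfile-style pins: artifact name -> expected sha256 hex digest.
PINNED = {
    "some-pkg-1.3.0.tgz": hashlib.sha256(b"original tarball bytes").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Refuse any artifact whose hash differs from the pin -- the same
    check `npm ci` and pip's --require-hashes perform on install."""
    expected = PINNED.get(name)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected

print(verify_artifact("some-pkg-1.3.0.tgz", b"original tarball bytes"))  # True
print(verify_artifact("some-pkg-1.3.0.tgz", b"tampered tarball bytes"))  # False
print(verify_artifact("evil-pkg-0.0.1.tgz", b"anything"))                # False
```

A registry compromise that swaps the tarball changes the digest, so the install fails loudly instead of silently running attacker code.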
This is your supply chain security pipeline. Every dependency goes through integrity checks and vulnerability scanning before it's installed. Even after installation, monitoring continues — because vulnerabilities can be discovered in packages you already use. Automation is key — make this a gate in your CI/CD pipeline.
This is the endgame for many attacks. The attacker has gotten in, escalated privileges, and moved laterally. Now they want the data. How do they collect it, and how do they get it out without being noticed?
T1213 is bulk data harvesting — think SELECT * FROM customers. T1567 uses legitimate cloud services like Dropbox or Google Drive to exfiltrate data, making it hard to distinguish from normal traffic. T1020 automates the process with scripts that systematically extract and transfer data.
This is behavioral analytics in action. We baseline each user's normal data access patterns — how many records they typically access, which tables, what time of day. When someone suddenly accesses 10x their normal volume or touches sensitive tables they've never queried before, the anomaly score spikes and we trigger step-up authentication.
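A toy version of that baseline in Python (the mean-based model and the 10x spike factor are assumptions for illustration; real systems use richer statistics and per-table baselines):

```python
from collections import defaultdict

class AccessBaseline:
    """Running per-user baseline of records accessed per query."""
    def __init__(self, spike_factor: float = 10.0):
        self.totals = defaultdict(lambda: [0, 0])  # user -> [queries, records]
        self.spike_factor = spike_factor

    def observe(self, user: str, records: int) -> bool:
        queries, total = self.totals[user]
        # Only alert once we have a baseline (5+ queries) and the new
        # access is spike_factor times the user's historical mean.
        anomalous = queries >= 5 and records > self.spike_factor * (total / queries)
        self.totals[user] = [queries + 1, total + records]
        return anomalous  # True -> trigger step-up auth / alert

base = AccessBaseline()
for _ in range(20):
    base.observe("alice", 50)       # normal day: ~50 records per query
print(base.observe("alice", 40))    # False: within baseline
print(base.observe("alice", 5000))  # True: ~100x normal volume
```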
Traditional rate limiting counts requests. Exfiltration-aware rate limiting counts bytes. An attacker might make only 10 API calls, but if each returns 10MB of data, that's 100MB of exfiltration in minutes. By tracking cumulative transfer volume per user per time window, we can detect and block bulk data theft even at low request rates.
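A sliding-window sketch of byte-based limiting (the window and budget are assumed values; production systems would persist this in Redis or similar rather than in-process):

```python
from collections import defaultdict

WINDOW = 3600             # one-hour sliding window
BYTE_BUDGET = 50 * 2**20  # assumed budget: 50 MB per user per hour

class ByteRateLimiter:
    """Limit cumulative response *bytes* per user, not request count."""
    def __init__(self):
        self.usage = defaultdict(list)  # user -> [(timestamp, bytes), ...]

    def allow(self, user: str, response_bytes: int, now: float) -> bool:
        # Keep only transfers that fall inside the current window.
        window = [(t, b) for t, b in self.usage[user] if now - t < WINDOW]
        spent = sum(b for _, b in window)
        if spent + response_bytes > BYTE_BUDGET:
            return False  # low request rate, huge volume: still blocked
        window.append((now, response_bytes))
        self.usage[user] = window
        return True

rl = ByteRateLimiter()
allowed = [rl.allow("alice", 10 * 2**20, now=float(i)) for i in range(5)]
print(all(allowed))                            # True: 50 MB over 5 calls
print(rl.allow("alice", 10 * 2**20, now=5.0))  # False: call 6 would hit 60 MB
```

Six requests in six seconds would sail through any request-count limiter; the byte budget is what catches it.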
Multiple checkpoints in the data flow. Authorization happens first, then anomaly detection checks the pattern, then bulk transfer detection checks the volume, and finally rate limiting checks the frequency. Any checkpoint can block the request and alert the security team. Layered defense for data protection.
Now let's talk about how to actually bring all of this into your development workflow. Theory is great, but what do you do on Monday morning?
This is your threat modeling loop. For every feature, ask: what ATT&CK techniques could target this? Then design detections, implement them, and test. The loop is continuous — as new techniques are added to ATT&CK, revisit your features. This is a shift from reactive patching to proactive defense design.
This table is a cheat sheet. For every feature in your application, you can look up which ATT&CK techniques are relevant. User login maps to credential attacks. File upload maps to web shells. Data export maps to exfiltration. Use this as a starting point for your threat model — customize it for your specific application.
These are the five patterns we've seen throughout this talk. Behavioral analytics baseline normal behavior and flag anomalies. Technique logging uses ATT&CK IDs so your SIEM can correlate across systems. Adaptive controls increase security requirements when risk increases. Honey tokens are traps for attackers. Immutable auditing ensures your investigation data can't be tampered with.
Defense in depth means every layer has its own security controls. Input validation stops injection, authentication verifies identity, authorization controls access, behavioral analytics detects anomalies, threat intelligence provides context, and automated blocking responds in real-time. An attacker has to bypass ALL of these layers.
This is how OWASP and ATT&CK work together in practice. OWASP gives you secure coding practices, vulnerability testing, and security reviews. ATT&CK adds behavioral monitoring, technique correlation, and threat hunting. Together, you get: Secure by Design, Monitor by Behavior, and Respond by Intelligence.
For every tool and practice you already use, there's both an OWASP and ATT&CK angle. Your SAST scanner finds vulnerabilities (OWASP) and also identifies code patterns that enable specific techniques (ATT&CK). Your penetration tests can exploit vulnerabilities AND simulate real adversary technique chains. Leverage what you already have.
Don't try to boil the ocean. Phase 1 is mapping and logging — understand what you're defending and make sure you can see what's happening. Phase 2 adds active detection and automated response. Phase 3 adds advanced capabilities like deception and threat intelligence. Each phase builds on the last.
Security culture is as important as security code. Train your team on ATT&CK, include technique IDs in your Jira tickets, use the ATT&CK Navigator for visual coverage maps, and encourage "red team thinking" in design reviews. Ask: "If I were an attacker, how would I abuse this feature?"
The ATT&CK Navigator is a free, interactive tool for visualizing your coverage. Color-code techniques green for defended, yellow for partially covered, and red for gaps. It's incredibly powerful for communicating with stakeholders — a visual heat map of your security posture is worth a thousand bullet points.
Don't be overwhelmed by 200+ techniques. Start with these three areas: authentication monitoring catches credential abuse, session security prevents hijacking, and data access anomalies catch collection and exfiltration. The techniques these defend against appear in almost every major breach. Master these, then expand.
This is a joke but also very real. Every security TODO in your backlog is a technique an attacker can exploit. The difference between the left and right columns is just time. ATT&CK helps you prioritize which TODOs to fix first based on real adversary behavior.
Let's wrap up with the key messages I want you to take away from this talk.
If you remember nothing else: OWASP and ATT&CK are complementary, not competing. Think like an attacker to build better defenses. Detection is a developer responsibility, not just ops. Tag your security events with ATT&CK IDs. Use behavioral analytics. And start with three techniques — don't try to cover everything at once.
Here are resources to continue your journey. The ATT&CK framework site and Navigator are your primary tools. D3FEND is MITRE's companion project that maps defensive countermeasures to techniques. And please reach out — I love talking about this stuff.
Thank you! I'm happy to take questions. If we run out of time, catch me in the hallway or reach out on BlueSky or LinkedIn.