This blog entry was originally written by Bob Rudis (@hrbrmstr), I am just migrating the post to the new SIRA site.
All the following newly-minted risk assessment types have been inspired by actual situations. Hopefully you get to stick to just the proper OCTAVE/FAIR/NIST/etc. ones where you practice.
- HARA :: Half-Assed Risk Assessment — When you are given no semblance of potential impact data and only a woefully incomplete list of assets, but are still expected to return a valid risk rating.
- CRA :: Cardassian Risk Assessment — When you are provided the resultant risk rating prior to beginning your risk assessment. (It's a Star Trek reference for those with actual lives.)
  "We're going to do x anyway because we don't believe it's a high risk, but go ahead and do your assessment since the Policy mandates that you do one."
- IRA :: Immediate Risk Assessment — This one has been showcased well by our own Mr. DBIR himself on the SIRA podcasts. A risk assessment question by a senior executive who wants an answer *now* (dammit)! It is often phrased as "Which is more secure, x or y?" or "We need to do z. What's the worst that can happen?" You literally have no time to research, and if you don't know the answer, then "Security" must not be very smart.
- IRAVA :: In Reality, A Vulnerability Assessment — When you're asked to determine risk, but what they are *really* asking for is a list of the vulnerabilities in a particular system/app. Think Tenable/Qualys scan results vs FAIR or OCTAVE.
- IOCAL :: I Only Care About Likelihood — This is when the requester is absolutely fixated on likelihood and believes wholeheartedly that a low likelihood immediately means low risk. Any answer you give is also followed up with "have we ever had anything like x happen in the past?" and/or "have our competitors been hit with y yet?"
- AD3RA :: Architecture Design Document Disguised As A Risk Assessment — When you are given all the (decent) inputs necessary to complete a fairly comprehensive risk assessment, but are then asked to include a full architecture design document on how to mitigate every finding. The sad truth is, the project team couldn't get the enterprise architects (EA) to the table for the first project commit stage; but since you know enough about the technologies in play to fix the major problems, why not make you do the EA department's job while you're already cranking out the mandatory risk assessment?
- WDRA :: Wikipedia Deflected Risk Assessment — When you perform a risk assessment, but a manager or senior manager finds data on Wikipedia that they use to negate your findings. (Since - as we all know - Wikipedia is the sum of all correct human knowledge).
If you have also been coerced into performing an insane risk assessment that doesn't fit these models, feel free to share it in the comments.