0:00

To give you one real example of how risk management has been incorporated into the requirements lifecycle, we're going to look at NASA's Defect Detection and Prevention technique, also known as DDP. The approach aims to systematize the identify, assess, and control cycle in order to integrate risk management into the requirements engineering process.

It was developed by NASA in 2003, and it includes a quantitative reasoning tool with visualization facilities as well. The technique also handles multiple risks in parallel and explicitly considers risk consequences.

The whole technique could be simplified, but in general it consists of three basic steps. In the first step, you elaborate a risk impact matrix. Then, you elaborate a countermeasure effectiveness matrix. Lastly, you determine an optimal balance of risk reduction versus countermeasure cost. Then you rinse and repeat as you go through.

In the first part of DDP, or Defect Detection and Prevention, we build a risk consequence table with domain experts. This table captures the estimated severity of all the consequences of each risk. For each objective and associated risk, the table specifies the estimated loss, as a proportion of attainment of the objective, if that risk occurs. If there is no loss, you put a zero in that cell. A one indicates total loss. Anything in between is scaled between zero and one.
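The step-one table can be sketched in a few lines of code. Everything here (the objectives, objective weights, risks, likelihoods, and loss fractions) is invented purely for illustration, and the weighting scheme is just one plausible way to combine the entries, not necessarily the exact formula the DDP tool uses.

```python
# Hypothetical risk impact matrix: impacts[objective][risk] is the fraction
# of the objective lost if the risk occurs (0 = no loss, 1 = total loss).
objectives = {"data_quality": 0.5, "timeliness": 0.3, "cost_control": 0.2}  # weights
impacts = {
    "data_quality": {"sensor_fault": 0.8, "late_delivery": 0.0},
    "timeliness":   {"sensor_fault": 0.2, "late_delivery": 0.9},
    "cost_control": {"sensor_fault": 0.1, "late_delivery": 0.4},
}
likelihood = {"sensor_fault": 0.3, "late_delivery": 0.5}  # estimated probabilities

def weighted_risk(risk):
    """Expected weighted loss across all objectives if `risk` occurs."""
    return likelihood[risk] * sum(
        weight * impacts[obj][risk] for obj, weight in objectives.items()
    )

for r in likelihood:
    print(r, round(weighted_risk(r), 3))
```

With these made-up numbers, each risk gets a single weighted-risk score that the later steps can work against.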

In step two, for each pair of countermeasure and weighted risk, specify an estimate of the fractional reduction of the risk if the countermeasure is applied. If there is no reduction, you put a zero. If the countermeasure totally eliminates the risk, you put a one. By looking at each row, you can then calculate the overall effect of applying a set of countermeasures to the corresponding risks.
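Step two can be sketched the same way. The effectiveness values and countermeasure names below are hypothetical, and I'm assuming, as one common modeling choice rather than DDP's documented internals, that reductions from multiple countermeasures combine multiplicatively.

```python
# Hypothetical effectiveness matrix: effectiveness[countermeasure][risk] is the
# fractional reduction of that risk (0 = no reduction, 1 = risk eliminated).
effectiveness = {
    "redundant_sensor": {"sensor_fault": 0.7, "late_delivery": 0.0},
    "schedule_buffer":  {"sensor_fault": 0.0, "late_delivery": 0.6},
}

def residual_fraction(risk, selected):
    """Fraction of the weighted risk remaining after applying the `selected`
    countermeasures; reductions are assumed to combine multiplicatively."""
    frac = 1.0
    for cm in selected:
        frac *= 1.0 - effectiveness[cm][risk]
    return frac

# 70% reduction of sensor_fault leaves 30% of its weighted risk.
print(residual_fraction("sensor_fault", ["redundant_sensor", "schedule_buffer"]))
```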

Each countermeasure has a benefit in terms of risk reduction, but countermeasures also have costs associated with them, and a given choice may add cost or, in some cases, decrease it. So in step three, we need to estimate those costs with domain experts. The DDP tool may then visualize the effectiveness of each countermeasure together with its cost.
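As a toy illustration of step three, we can attach a cost to each countermeasure and compare it against the risk reduction it buys. The figures are invented, and ranking by a simple benefit-to-cost ratio is just one easy heuristic, not the optimization DDP itself performs.

```python
# Hypothetical cost of each countermeasure, and the total weighted-risk
# reduction (benefit) each one achieves on its own. All figures are invented.
costs = {"redundant_sensor": 40.0, "schedule_buffer": 15.0}
benefits = {"redundant_sensor": 0.101, "schedule_buffer": 0.105}

def benefit_cost_ratio(cm):
    """Risk reduction per unit of cost; one simple way to rank countermeasures."""
    return benefits[cm] / costs[cm]

# Countermeasures from best to worst value for money.
ranked = sorted(costs, key=benefit_cost_ratio, reverse=True)
print(ranked)
```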

The Defect Detection and Prevention process can now be used to visualize quite a few things. A risk-based balance chart can show the residual impact of each risk on all objectives if some particular combination of countermeasures is selected. From this chart, we can then explore optimal combinations of countermeasures, where we're trying to achieve risk balance with respect to the cost constraints. Finding these optimal combinations equates to a 0/1 knapsack problem, where you're trying to balance your risk and your cost.

If you aren't familiar with the 0/1 knapsack problem, let's say that you're going to school or to work and you need to fill your bag with everything that you need. Take a second and think: what would you put in your bag? In what order would you put it in? And why? Now, many of us would start by putting in a laptop, a notebook, maybe a laptop or phone charger, and so on. Given that I live in Colorado, where the air is very dry, next I would put in a bottle of water. Oh, and a sandwich. I almost forgot my sandwich. I don't want to go through the work day without my sandwich, or pizza. We'll see. Anyway, what would you put in?

Each item that you put into your knapsack takes up space, and each has some value to you, where all of those values are different. Here, we're trying to build our product, our knapsack, with the amount of risk and cost in mind. The risk and the cost have to fit within our constraints. We want something as functional as possible while still fitting within those constraints.

By the way, if you aren't really familiar with the knapsack problem, know that there are also fractional knapsack problems. In the fractional knapsack, you could cut your laptop in half and put half of it in. Obviously, that makes much more sense in terms of adding and deleting functionality of a program. But know that those algorithms are out there.

The 0/1 knapsack problem is NP-complete. The knapsack problem is interesting from the perspective of computer science for many, many reasons. There are two different problems: the decision problem and the optimization problem. The decision form of the knapsack problem asks: can a value of at least v be achieved without exceeding some particular weight? That problem is NP-complete, which means that there is no known algorithm that is both correct and fast in all cases. And by fast, we mean that there is no known algorithm that runs in polynomial time.

While the decision problem is NP-complete, the optimization problem is NP-hard: its resolution is at least as difficult as the decision problem, and there is no known polynomial algorithm which can tell, given some solution, whether or not it is optimal.

Since we do not have perfect solutions here, we instead use approximation algorithms, and these are appropriate for approaching any kind of 0/1 knapsack problem. The main ways of doing this are through dynamic programming and also through particular machine learning algorithms. I encourage you to read up more on this if you've never heard of any of this before.
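To make the dynamic programming route concrete, here is a minimal sketch of the classic 0/1 knapsack recurrence, framed as choosing countermeasures (items) to maximize total risk reduction (value) within a cost budget (capacity). The values and costs are invented, and the costs are assumed to be integers.

```python
def knapsack(values, costs, budget):
    """Return the maximum total value achievable within `budget`
    using each item at most once (0/1 knapsack, integer costs)."""
    best = [0] * (budget + 1)
    for v, c in zip(values, costs):
        # Iterate the budget backwards so each item is counted at most once.
        for b in range(budget, c - 1, -1):
            best[b] = max(best[b], best[b - c] + v)
    return best[budget]

values = [10, 40, 30, 50]   # e.g. risk reduction per countermeasure (invented)
costs  = [5, 4, 6, 3]       # e.g. cost per countermeasure (invented)
print(knapsack(values, costs, 10))  # best achievable under a budget of 10
```

This runs in time proportional to the number of items times the budget, which is pseudo-polynomial, consistent with the problem still being NP-complete in general.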

In DDP, we use a simulated annealing optimization procedure to find near-optimal solutions. For example, we try to maximize satisfaction of objectives under some particular cost threshold, or we could minimize cost above a satisfaction threshold. The optimality criterion can be set by the requirements engineer or by the developer, but it's something that you really should be discussing with your project experts as well.
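The general shape of such a simulated annealing search can be sketched as follows. This is not DDP's actual optimizer: the countermeasure values, costs, budget, move rule, and cooling schedule are all invented for illustration.

```python
import math
import random

values = [10, 40, 30, 50]   # risk reduction per countermeasure (invented)
costs  = [5, 4, 6, 3]       # cost per countermeasure (invented)
budget = 10                 # cost threshold (invented)

def score(selection):
    """Total value of a selection, or -inf if it exceeds the budget."""
    cost = sum(c for c, s in zip(costs, selection) if s)
    if cost > budget:
        return float("-inf")
    return sum(v for v, s in zip(values, selection) if s)

random.seed(0)
current = [False] * len(values)   # start with no countermeasures selected
best = current[:]
temperature = 10.0
for step in range(2000):
    candidate = current[:]
    i = random.randrange(len(candidate))
    candidate[i] = not candidate[i]   # flip one countermeasure in or out
    delta = score(candidate) - score(current)
    # Always accept improvements; accept worse moves with a probability
    # that shrinks as the temperature cools.
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        current = candidate
        if score(current) > score(best):
            best = current[:]
    temperature *= 0.995   # cool down gradually

print(score(best))
```

Because the acceptance of worse moves decays with temperature, the search explores broadly at first and then settles, which is why it tends to find near-optimal rather than guaranteed-optimal selections.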

For more information about DDP, check out the reading by Feather and Cornford from 2003. It's called "Quantified Risk-Based Requirements Reasoning" and is included in one of our readings.
