Basic probability theory is a tool used in decision making and risk management to solve analytical problems that would otherwise require difficult integrals (Durrett, 2010). The results depend on the random occurrence of finite or infinite events in a given probability space. Outcomes often follow heavy-tailed distributions, which are used to predict and analyse risks through the probabilistic behavior of continuous or discrete random variables, sometimes assumed to be symmetric about zero.
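The practical difference a heavy tail makes can be sketched with a small comparison. The distributions and parameters below are purely illustrative assumptions (a Pareto law standing in for a heavy-tailed loss, a standard normal for a light-tailed one); they are not drawn from the cited sources.

```python
import math

def normal_tail(x):
    """P(X > x) for a standard normal (light-tailed) variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pareto_tail(x, alpha=2.0, xm=1.0):
    """P(X > x) for an illustrative Pareto(alpha, xm) variable, x >= xm."""
    return (xm / x) ** alpha

# Far in the tail, the heavy-tailed Pareto assigns orders of magnitude
# more probability to extreme losses than the normal does.
for threshold in (2, 4, 8):
    print(threshold, normal_tail(threshold), pareto_tail(threshold))
```

This is why tail behaviour, not just the mean, drives risk estimates: at a threshold of 4, the normal tail is already negligible while the Pareto tail is still several percent.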
Probability theory is used to model risk management problems mathematically, avoiding the difficulty of solving them with routine analytical methods (Durrett, 2010). Solutions are obtained by classifying risks into categories: objective risk, assessed with objective probability, and subjective risk, assessed with subjective probability (Durrett, 2010).
Methods and tools for qualitative risk analysis
The stochastic method of risk analysis treats loads and resistances as random variables and evaluates risk by integrating over the domain of the load and resistance values. The problem with this method is that the unknown probability density function must be determined before any calculation can be made. The fuzzy set method uses quantifiable variables to set lower and upper bounds when a problem is modelled mathematically. Its weakness is that risk cannot be measured reliably, because the variables must be fixed before the outcomes are calculated.
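The stochastic load-resistance idea can be sketched with a Monte Carlo estimate of the failure probability P(load > resistance). The normal distributions and their parameters below are assumptions chosen for illustration; in practice, as the paragraph notes, the true density functions are exactly what is unknown.

```python
import random

def failure_probability(n=100_000, seed=42):
    """Monte Carlo estimate of P(load > resistance), approximating the
    risk integral over the load/resistance domain.

    Assumes (illustratively) normally distributed load and resistance;
    real analyses must first establish these densities.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        load = rng.gauss(50, 10)        # assumed load distribution
        resistance = rng.gauss(80, 12)  # assumed resistance distribution
        if load > resistance:
            failures += 1
    return failures / n

p = failure_probability()
print(f"estimated failure probability: {p:.4f}")
```

With these assumed parameters the analytic answer is P(Z > 30/√244) ≈ 0.027, so the simulation should land near that value; change either density and the estimate changes with it, which is the method's stated sensitivity.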
A study by Gibson (2011) shows that qualitative risk analysis can be conducted using historical data, risk rating scales, Delphi methods, and brainstorming tools. The weakness of these tools is that it is difficult to quantify and evaluate the effect of risks on resource costs, scope, project performance, schedule, budget, and project deliverables. It is also difficult to develop scales that assign probabilities to events accurately enough to rate the effects of the risks.
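A minimal sketch of such a risk rating scale follows. The 1-to-5 scores, the score thresholds, and the example risks are all hypothetical; they illustrate the kind of scale the paragraph describes, and the difficulty it notes is precisely in justifying these numbers.

```python
# Illustrative qualitative risk-rating scale: each risk is scored for
# probability and impact from 1 (low) to 5 (high); the rating is the
# product of the two scores, bucketed into Low / Medium / High.
def rate_risk(probability, impact):
    score = probability * impact
    if score >= 15:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# Hypothetical project risks with (probability, impact) scores.
risks = {
    "schedule slip": (4, 3),
    "budget overrun": (2, 5),
    "scope creep": (5, 4),
}
ratings = {name: rate_risk(p, i) for name, (p, i) in risks.items()}
print(ratings)
```

Note that the bucket boundaries (6 and 15 here) are a design choice with no objective basis, which is exactly the scaling problem the text identifies.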
Development and contingency allowances
The contingency models used to assess systematic risks can be developed from the practices of information system users and from the processes within the information systems that generate data for project-specific risks not amenable to regression analysis (Gibson, 2011). To compensate for the risks and uncertainties that occur, contingencies should be determined from previous projects by comparing actual against estimated costs.
The resulting data are plotted as the risk ratings assigned at estimation time against the percentage overruns of those estimates, and corrections are then made to cover changes in the project scope. The values on the resulting graph of cost growth against project definition are evaluated to determine the contingency allowances that can be provided (Gibson, 2011).
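The plot-and-read-off procedure above amounts to fitting a trend through historical (risk rating, percentage overrun) points and reading the contingency for a new project's rating. The history below is hypothetical data invented for illustration, and the straight-line fit is one simple choice of trend, not a method prescribed by the source.

```python
# Hypothetical historical projects: (risk rating at estimate time,
# percentage cost overrun actually observed).
history = [(1, 2.0), (2, 5.0), (3, 9.0), (4, 15.0), (5, 22.0)]

def fit_line(points):
    """Ordinary least-squares line through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def contingency(rating):
    """Contingency allowance (% of estimate) read off the fitted trend."""
    slope, intercept = fit_line(history)
    return slope * rating + intercept

print(f"allowance for rating 3: {contingency(3):.1f}%")
```

The scope-change corrections the paragraph mentions would adjust the historical overruns before fitting, so that the trend reflects estimating error rather than deliberate scope growth.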
Triggers for monitoring potential risks
According to Gibson (2011), master logic diagrams are used to monitor risk occurrences by categorizing undesirable events and detailed event descriptions on a single diagram. Event sequence diagrams and event trees then provide flow paths for charting risk. In addition, the integrated scenario model can be quantified to provide sets of pivotal events for monitoring and predicting risk, including the propagation of epistemic uncertainties. Gibson (2011) also notes that the elapsed time method is used to determine the effects of inoperable catastrophic risks, that relative variance is determined with chi-square test methods depending on the category of risk, and that the threshold valuation method is analysed with variance-covariance techniques.
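The event-tree quantification described above can be sketched as a product of branch probabilities along each flow path. The pivotal events ("detection", "mitigation") and all probabilities below are assumptions made up for the sketch, not values from the source.

```python
# Minimal event-tree sketch: an initiating event is followed by two
# hypothetical pivotal events; each path's probability is the product
# of the branch probabilities along it.
P_INIT = 0.01      # assumed initiating-event probability
P_DETECT = 0.9     # assumed probability that detection succeeds
P_MITIGATE = 0.8   # assumed probability mitigation succeeds, given detection

paths = {
    "detected, mitigated":     P_INIT * P_DETECT * P_MITIGATE,
    "detected, not mitigated": P_INIT * P_DETECT * (1 - P_MITIGATE),
    "undetected":              P_INIT * (1 - P_DETECT),
}
for name, prob in paths.items():
    print(f"{name}: {prob:.4f}")
```

Because the branches at each node are exhaustive, the path probabilities sum back to the initiating-event probability, which is a useful consistency check when a tree is quantified; epistemic uncertainty would be propagated by repeating this calculation over distributions on the branch probabilities rather than point values.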
References

Durrett, R. (2010). Probability: Theory and examples. Cambridge University Press.

Gibson, D. (2011). Managing risk in information systems. Sudbury, MA: Jones & Bartlett Learning.