
Risk Analysis and Decision Making Essay Sample


Introduction

Q1: Part A

Risk management is the process of identifying and proactively responding to project risks. Generally (but not always) you will look for ways to eliminate risks or to minimise the impact of a risk if it occurs.

A risk contingency budget can be established to prepare in advance for the possibility that some risks will not be managed successfully. The risk contingency budget contains funds that can be tapped so that your project does not go over budget.

Part B:

We need two figures for each risk:

P – the probability that the risk will occur.

I – the impact on the project if the risk occurs. (This can be broken down further into cost impact and schedule impact, but let's address only a cost contingency budget for now.)

If you use this approach for all of your risks, you can ask for a risk contingency budget to cover the impact on your project if one or more of the risks occur. For example, suppose you have identified six risks on your project, as follows:

Risk  | P (Risk Probability) | I (Cost Impact) | Risk Contingency
A     | 0.80                 | $10,000         | $8,000
B     | 0.30                 | $30,000         | $9,000
C     | 0.50                 | $8,000          | $4,000
D     | 0.10                 | $40,000         | $4,000
E     | 0.30                 | $20,000         | $6,000
F     | 0.25                 | $10,000         | $2,500
Total |                      | $118,000        | $33,500

Based on the identification of these six risks, the potential impact on your project is $118,000. However, you cannot ask for that level of risk contingency budget; the only reason you would need that much money is if every risk occurred. Remember that the goal of risk management is to ensure that the risks do not affect your project, so you would expect to be able to manage most, if not all, of these risks effectively. The risk contingency budget should reflect both the potential impact of each risk and the probability that the risk will occur. This is reflected in the last column.

Notice that the total contingency requirement for this project is $33,500, which could be added to your budget as risk contingency. If risks C and F actually occurred, you would be able to tap the contingency budget for relief. However, if risk D actually occurred, the risk contingency budget might not be sufficient to protect you from the impact. Risk D has only a 10% probability of occurring, so the project team should focus on this risk to ensure that it is managed successfully. Even if it cannot be managed completely, its impact on the project will hopefully be lessened through proactive risk management.
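To make the table's arithmetic explicit, here is a minimal sketch (Python is my choice here; the essay itself contains no code) that reproduces the contingency figures as probability × impact for the six risks listed above.

```python
# Reproduces the contingency table above: each risk contributes
# probability * cost impact, and the totals are simple sums.
risks = {
    "A": (0.80, 10_000),
    "B": (0.30, 30_000),
    "C": (0.50, 8_000),
    "D": (0.10, 40_000),
    "E": (0.30, 20_000),
    "F": (0.25, 10_000),
}

contingency = {name: p * impact for name, (p, impact) in risks.items()}

print(contingency)                                   # A: 8000, B: 9000, C: 4000, D: 4000, E: 6000, F: 2500
print(sum(impact for _, impact in risks.values()))   # 118000 (total potential impact)
print(sum(contingency.values()))                     # 33500.0 (total risk contingency)
```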

Part C:

The risk contingency budget works well when several risks are involved: the more risks the team identifies, the more the overall budget risk is spread out between them. The expected monetary value (EMV) method provides an equation for working out the right amount of budget to apply to the risk contingency budget: the contingency is the sum, over all risks, of probability times impact.

After drawing the decision tree (which can be found in the appendices), the following results were obtained for the expected value at each node. The highlighted value for each option is the optimal expected value of that option.

Nodes | Option 1 (EMV) | Option 2 (EMV) | Option 3 (EMV)
A     | 2,525,000      | 1,710,000      | 925,000
B     | 625,000        | 1,410,000      | 1,225,000
C     | 2,425,000      | 1,560,000      | 1,515,000
D     | 525,000        | 1,260,000      | 1,178,500
E     | 2,575,000      | 1,650,000      | 1,178,500
F     | 375,000        | 1,500,000      |
G     | 2,025,000      | 1,572,000      |
H     | 125,000        | 1,572,000      |
I     | 2,050,000      |                |
J     | 1,950,000      |                |
K     | 1,800,000      |                |
L     | 1,550,000      |                |
M     | 2,050,000      |                |
N     | 1,950,000      |                |
O     | 1,800,000      |                |
P     | 1,550,000      |                |
Q     | 1,390,000      |                |
R     | 1,742,500      |                |
S     | 1,742,500      |                |
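The node values above are EMVs rolled back from the decision trees in the appendices. Since the trees themselves are not reproduced here, the branch figures in this sketch are placeholders only; it simply shows how a chance node's EMV is the probability-weighted sum of its branch payoffs.

```python
# Illustrative only: the real branch probabilities and payoffs live in the
# appendices, so the figures below are placeholders.
def chance_node_emv(branches):
    """EMV at a chance node = sum over branches of probability * payoff."""
    return sum(p * payoff for p, payoff in branches)

# e.g. a branch with a 75% chance of a 1,000,000 profit and a 25% chance
# of a 900,000 loss (the kind of outcome discussed for option 1 below)
print(chance_node_emv([(0.75, 1_000_000), (0.25, -900_000)]))  # 525000.0
```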

After examining the table and the decision trees, the following findings were reached.

Part D:

* Option 1 gives the largest EMV, 1,742,500, compared with option 2 (1,572,000) and option 3 (1,178,500), so it is advisable for the firm to take that decision (although it is also the riskiest). Furthermore, when deciding whether to tender for the subcontract or to cut prices before building any houses, subcontracting (1,742,500) appears to be the better choice compared with building houses (1,390,000).

* Option 3 is the least attractive alternative, as it gives the smallest payoff, and when comparing the different possible outcomes of option 3 with those of option 2, option 2 always comes out on top (neither of them carries any risk of a loss or of no gain). Therefore, the final choice should be between option 1 and option 2.

* Even though the EMV of option 1 is larger than the EMV of option 2, there is not much percentage difference between the expected values of the two (9.78%), so I will use sensitivity analysis to work out which will be the better choice.

Risk analysis for option 1

In option 1 there is the same riskiness for both cutting prices (building the houses) and subcontracting, so the firm should choose subcontracting because it has the larger EMV. In option 1, the most likely outcome in year 1 after choosing subcontracting is a profit of 750,000, and there is not much variation between the other possible outcomes (nodes N and O), so year 1 is not risky. Subcontracting does become risky in years 2 and 3, however: in each of these years there is a 25% chance of making a loss of 900,000, which is a huge amount and very undesirable, and since this possibility extends to year 3 as well, the probability of making a loss in at least one of the two years becomes significant.

The probability of the firm making a profit of 1,000,000 in both year 2 and year 3 is 0.75 × 0.75 = 0.5625, so the probability of a loss-making year occurring in at least one of years 2 or 3 is 1 − 0.5625 = 0.4375. This is a large probability, and a risk-averse firm would avoid this scenario at all costs. Therefore option 1 appears undesirable in comparison with option 2.
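A quick check of these figures, using only the probabilities quoted in the paragraph above:

```python
p_profit = 0.75                  # chance of a 1,000,000 profit in a given year
p_profit_both = p_profit ** 2    # profit in both year 2 and year 3
p_loss_in_one = 1 - p_profit_both  # a loss-making year in at least one of them

print(p_profit_both)   # 0.5625
print(p_loss_in_one)   # 0.4375
```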

Sensitivity analysis of option 1:

Sensitivity analysis poses the basic question: how should the decision maker adjust his or her decisions in the light of uncertain outcomes?

For the sensitivity analysis I am going to use the Laplace (equal likelihood) criterion to work out the average payoff of option 2 at the end of year 3 (because there is not much percentage difference between the individual outcomes).

Laplace criterion for option 2: (1.8 + 1.35 + 1.5 + 1.05 + 1.65 + 1.2 + 1.35 + 0.9) ÷ 8 = 1.138

Therefore 1,138,000 is the average payoff of option 2 using the Laplace criterion. Now I can use this value to help me carry out a sensitivity analysis for option 1, subcontract, chance node K (this is the risky part of option 1; see appendix A). I am using K because it is the most likely outcome and closest to the average.
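For reference, a minimal sketch of the Laplace (equal-likelihood) criterion as it is used here: it treats every terminal payoff as equally likely and takes the simple average. The payoffs in the example are illustrative; the actual option 2 payoffs come from the decision tree in the appendices.

```python
def laplace_criterion(payoffs):
    """Laplace (equal-likelihood) criterion: treat every outcome as equally
    likely and take the simple average of the payoffs."""
    return sum(payoffs) / len(payoffs)

# Illustrative payoffs only (in millions); the actual option 2 payoffs are
# read off the decision tree in the appendices.
print(laplace_criterion([1.6, 1.2, 0.8, 1.4]))  # 1.25
```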

Additional information the business needs

The firm needs to be profitable within the next 9 months in order to survive. Therefore the business needs to know the amount of the loan that has to be repaid; this information will show them the amount of profit they need to make in the short run (the first year), so that they can reassess which options they would consider (if the repayment is very high, then option 1 might be more attractive than option 2). This is quantitative data.

Q2: Part A

In this part, details of the study design and the instrumentation for the questionnaire survey are provided. The methodology for the data collection and the details of the operationalisation (how the main variables will be measured) will be described.

The study will be conducted in a natural environment (a natural environment is one where events normally occur without any interference) (Sekaran, 1992). Because the purpose of the study is to investigate the relationship between job satisfaction and organisational commitment, a natural, non-contrived setting is preferred. Ethical issues will be avoided by selecting this kind of setting.

Because this study will try to investigate the relationship between the independent and the dependent variable, it is analytical in nature.

Data Collection Technique

The data collection method that will be used will be solely a questionnaire survey. Sekaran (1992) defined a questionnaire as a pre-formulated written set of questions to which respondents record their answers, usually within rather closely defined alternatives. Saunders et al. (2003) similarly defined a questionnaire as a general term covering all data collection techniques in which each person is asked to respond to the same set of questions in a predetermined order.

Part B:

For a business to be successful, it is absolutely crucial that employees are healthy and able to work at full capacity. Providing preventative care that encourages a healthier lifestyle for employees, while allowing employees to invest in the future of their business, makes sound business sense. Thousands of Americans continue to ignore their health issues because they do not have access to health care or because they do not take the time to visit their physicians (Migliore). A workplace wellness programme empowers employees to lead healthier lives. Employers have been hit hard by the rising costs of providing employees with health care benefits, and companies are implementing a variety of employee wellness programmes to counteract inflating health claims. Most employees spend a significant amount of time at work and do not have enough time to look after their health.

Part C:

In order to understand random sampling, you need to become familiar with a couple of basic statistical concepts.

1. Error – This is the “plus or minus X%” that you hear about. What it means is that you are confident your results have an error of no more than X%.

2. Confidence – This is how sure you are about your error level. Expressed as a percentage, it is the same as saying: if you were to conduct the survey multiple times, how often you would expect to get similar results.

These two concepts work together to determine how reliable your survey results are. For example, if you have 90% confidence with an error of 4%, you are saying that if you were to run the same survey 100 times, the results would be within +/- 4% of the first run 90 times out of 100.

If you are not sure what level of error you can tolerate and what level of confidence you need, a good rule of thumb is to aim for 95% confidence with a 5% error level.

Error is also referred to as the “confidence interval” and confidence is also known as the “confidence level”. To avoid confusion, these concepts will simply be referred to as “error” and “confidence” in this article.

Determining the “Correct” Sample Size

Determining the “correct” sample size requires three pieces of information (a worked sketch follows this list):

1. The size of your population

2. Your desired error level (e.g. 5%)

3. Your desired level of confidence (e.g. 95%)
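The article lists the inputs but not the formula itself; one common choice, sketched below under that assumption, is Cochran's formula with a finite population correction, using the 95% confidence / 5% error rule of thumb mentioned above as defaults.

```python
import math

def sample_size(population: int, error: float = 0.05, confidence: float = 0.95) -> int:
    """Cochran's formula with a finite population correction (an assumed
    formula; the article names the inputs but not the calculation)."""
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]  # z-score lookup
    p = 0.5                                      # worst-case proportion (largest n)
    n0 = (z ** 2) * p * (1 - p) / (error ** 2)   # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite population correction

# e.g. a population of 2,000 at 5% error and 95% confidence
print(sample_size(2000))   # about 323
```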

Performing a Stratified Random Sample

If you are performing a stratified random sample, there are a couple of extra steps you need to take (a short sketch follows the steps below).

1. Determine the size of the smallest subgroup in your population. For example, if you want to look at males vs. females and there are fewer females, then this is the group you want to look at.

2. Calculate the number of people needed to achieve your desired error level and level of confidence for this subgroup.

3. Calculate what percentage of people you will need to survey within this subgroup (the number of people to survey divided by the total subgroup size).

4. Finally, calculate the number of people in each of the other subgroups required to achieve this same ratio (multiply the percentage from step 3 by the size of each of the other subgroups). This is how many people you will need to survey within each group.

Remember, a bigger group requires a smaller percentage to achieve the same level of accuracy. That is why we start with the smallest group and work our way up. The results you get from the bigger groups should actually be even more reliable than the results from the smallest group, but you can at least be certain that each group meets your minimum accuracy requirements.
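Here is a short sketch of the four steps above, assuming the same Cochran-style formula as in the previous sketch; the subgroup names and sizes are made up for illustration.

```python
import math

def required_n(population: int, error: float = 0.05, z: float = 1.96) -> int:
    """Same Cochran-style estimate as in the previous sketch (assumed formula)."""
    n0 = (z ** 2) * 0.25 / (error ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def stratified_sample_sizes(subgroups: dict, error: float = 0.05) -> dict:
    """Steps 1-4 above: size the smallest subgroup first, then apply the same
    sampling percentage to every other subgroup."""
    smallest = min(subgroups, key=subgroups.get)        # step 1: smallest subgroup
    n_small = required_n(subgroups[smallest], error)    # step 2: n for that subgroup
    ratio = n_small / subgroups[smallest]               # step 3: sampling percentage
    return {g: math.ceil(size * ratio) for g, size in subgroups.items()}  # step 4

# Hypothetical population: 400 females (smallest group) and 1,600 males
print(stratified_sample_sizes({"female": 400, "male": 1600}))
# {'female': 197, 'male': 788} at 5% error, 95% confidence
```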

Part D:

If you have used a particular scale before and need to compare results, use the same scale. Four on a five-point scale is not equivalent to eight on a ten-point scale: someone who rates an item “4” on a five-point scale might rate that item anywhere between “6” and “9” on a ten-point scale.

Do not use negative numbers when asking for ratings. Some people do not like to give negative numbers as answers. A scale of -2 to +2 is mathematically equivalent to a scale of 1 to 5, but in practice you will get fewer people choosing -2 or -1 than would choose 1 or 2. If you want 0 to be the midpoint of the scale when you produce reports, you can weight (recode) the responses after data collection to get that result.
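For example, a minimal sketch of re-centring responses after collection (illustrative values only):

```python
# Hypothetical example: ratings collected on a 1-5 scale, then re-centred to
# -2..+2 after collection so that 0 is the midpoint for reporting.
raw_ratings = [1, 3, 4, 5, 2, 4]          # illustrative survey responses
centred = [r - 3 for r in raw_ratings]    # shift so that 3 -> 0
print(centred)                            # [-2, 0, 1, 2, -1, 1]
```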
