Kinds of Market Risks
Though the difference between the average returns and the risks of the two assets is small, an investor choosing between them on the basis of mean and standard deviation will be guided by his or her perception of risk. On these figures, a risk-averse investor would choose Asset A, since lower risk is the overriding preference, while a risk-loving investor would prefer Asset B, deriving greater satisfaction from its higher standard deviation as well as its higher average return.
A risk-neutral investor is unaffected by the level of risk attached to an asset and simply prefers the higher return; such an investor would therefore also choose Asset B, which has the higher average return. The most popular and traditional measure of risk is the variance, or equivalently the standard deviation, but it differs considerably from an ordinary person's notion of risk. Some analysts have argued that mean and standard deviation are inadequate measures of risk (Barberis, 1998). The standard deviation identifies no particularly bad or particularly good outcomes; it is simply a measure of the possibility of being 'surprised' (Ciancanelli et al., 2001).
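The comparison above can be sketched with a few lines of Python. The return figures below are purely illustrative assumptions, not the essay's data set; they are chosen so that Asset B has both the higher mean and the higher standard deviation, matching the situation described.

```python
import statistics

# Hypothetical monthly returns (%) for two assets -- illustrative
# numbers only, not the figures from the essay's own data.
asset_a = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0]
asset_b = [2.5, -0.5, 3.0, -1.0, 2.8, 0.2]

for name, returns in (("Asset A", asset_a), ("Asset B", asset_b)):
    mean = statistics.mean(returns)     # average return
    stdev = statistics.stdev(returns)   # sample standard deviation (risk)
    print(f"{name}: mean={mean:.2f}%, stdev={stdev:.2f}%")
```

A risk-averse investor would read these two summary numbers and pick Asset A; a risk-loving or risk-neutral investor would pick Asset B.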
Using standard deviation as a risk measure therefore treats upside and downside surprises symmetrically, even though investors typically care mainly about the downside.
If an investor is mainly concerned with the maximum 'downside' risk, the concept of Value at Risk (VAR) is said to be a more suitable instrument (Goorbergh and Vlaar, 1999). A much improved approach is to leave the distribution of returns less constrained and to focus on its tail. VAR has emerged as the basic means of measuring risk, has been called the new science of risk management (Cook, 1997), and has been widely adopted as a dominant risk-measurement tool for investors (Jorion and Khoury, 1996; Dowd, 2004; Basak and Shapiro, 2001).
Value at Risk is described as a single statistical measure of possible portfolio losses (Dowd, 2004), which is easily interpretable and also allows users to focus on normal market conditions (Pritsker, 1997). It is defined as an estimate, at a predefined confidence level, of how much one can lose from holding an asset over a particular time period (Cook, 1997). Assuming that investors care about the odds of a really big loss, VAR answers the question: what is the worst an investor could lose over a given period, at a particular confidence level? For example, a monthly VAR of 1,500 at the 95% confidence level means there is a 5% chance that the return on the investment will fall below −1,500 in any month.
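The 95%/1,500 reading can be illustrated numerically. The mean and standard deviation below are assumed figures, chosen only so that the 5th percentile of a normal return distribution lands near −1,500; they do not come from the essay.

```python
from statistics import NormalDist

# Assumed monthly return distribution (currency units): Normal(500, 1216).
# These parameters are illustrative, picked so the 5th percentile ~ -1,500.
dist = NormalDist(mu=500, sigma=1216)

var_95 = dist.inv_cdf(0.05)    # return level breached only 5% of the time
prob_worse = dist.cdf(var_95)  # probability of a worse month: 5% by construction
print(f"95% monthly VAR (as a return level): {var_95:.0f}")
print(f"Chance of a worse month: {prob_worse:.0%}")
```

Reading the output: with 95% confidence, the monthly return will not fall below roughly −1,500; only 5% of months should be worse.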
Losses larger than the VAR will occur only with the specified small probability. The information VAR provides can be used in many ways (Dowd, 2004). If the VAR of an investment is too high, the investment's risk is too high, which in turn implies a larger capital requirement. Investment firms can use VAR to weigh the risks of potential investments before making decisions, and it can also help them implement portfolio-wide hedging strategies. At times, VAR is also used as a basis for rewarding traders, managers and other investors.
The first category of VAR method is Historical Simulation (HS), which relies on a set of past observations over some historical period. Instead of using these observations to estimate the investment's mean and standard deviation, historical simulation uses the actual percentiles of the observation period as the value-at-risk measures. The method requires no assumptions about the distribution of returns, since it uses only the empirical distribution. 'Plain' HS is the simplest form of calculating VAR by historical simulation.
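Plain HS can be sketched in a few lines: sort the observed returns and read off the empirical tail percentile, with no distributional assumption. The sample returns below are hypothetical illustrative figures.

```python
def historical_var(returns, confidence=0.95):
    """Plain historical-simulation VAR: the empirical (1 - confidence)
    percentile of past returns, quoted as a positive loss."""
    ordered = sorted(returns)                     # worst returns first
    index = int((1 - confidence) * len(ordered))  # position of the tail cutoff
    return -ordered[index]

# Twenty hypothetical past returns (%), purely for illustration.
past_returns = [-4.2, -2.1, -1.5, -0.3, 0.1, 0.4, 0.8, 1.2, 1.9, 2.5,
                -0.9, 0.2, 1.1, -1.8, 0.6, 0.9, -0.4, 1.4, 2.0, -2.6]

print(historical_var(past_returns))  # 95% VAR from the empirical distribution
```

Note that the estimate depends entirely on the sample: with only twenty observations the 5% tail is a single order statistic, which is exactly the small-sample weakness discussed next.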
This method estimates VAR by creating sub-samples of past returns, but accurate results require fairly large sub-samples, making it ineffective for estimating extreme risks. These drawbacks are addressed by Extreme Value Theory, which models the behaviour of the extremely low returns that cause large losses. The variance-covariance method is the most widely used method for VAR (Vlaar, 1999). A vast variety of variance-based VAR models assume the data to be normally distributed (Goorbergh and Vlaar, 1999; Hyung and Vries, 2005).
The variance-covariance method likewise assumes the series of returns to be normally distributed and independent, and for estimating the standard deviation over longer horizons, the one-period standard deviation is multiplied by the square root of time. In simple terms it requires only two inputs, an expected or average return (μ) and a standard deviation (σ), which together define a normal distribution curve. Under the assumption of normally distributed returns, the 95% confidence level lies 1.645 standard deviations, and the 99% confidence level 2.33 standard deviations, away from the mean.
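The parametric calculation just described reduces to a one-line formula: VAR = zσ√t − μt, with z = 1.645 at 95% or 2.33 at 99%. The daily mean, daily standard deviation and horizon used below are illustrative assumptions.

```python
from math import sqrt

def parametric_var(mu, sigma, z=1.645, horizon=1):
    """Variance-covariance VAR under assumed normality, as a positive loss.
    mu and sigma are per-period; sigma scales with sqrt(horizon),
    the mean scales linearly with horizon."""
    return z * sigma * sqrt(horizon) - mu * horizon

# Assumed daily mean 0.05%, daily stdev 1.2%; 95% confidence, 10-day horizon.
print(round(parametric_var(0.05, 1.2, z=1.645, horizon=10), 2))
```

Note the asymmetric scaling: the mean grows with t while the standard deviation grows only with √t, which is why the square-root-of-time rule appears in the text above.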
The most commonly used confidence level is 95%. The benefit of the normal distribution curve is said to be that the investor knows where the worst 5% and 1% of outcomes lie on the curve (Hendricks, 1996). The third type of model, the Monte Carlo method, builds a model for future returns and involves a very large number of computations drawn randomly from an assumed distribution of returns. The term refers to any series of calculations based on random trials, without prescribing the underlying methodology, and many users employ it purely to generate random scenarios.
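A minimal Monte Carlo VAR sketch follows: simulate many returns from an assumed distribution, then read the VAR off the simulated tail, just as historical simulation does with real data. The normal distribution, its parameters and the trial count are all assumptions made for illustration.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Assumed one-period return distribution and number of random trials.
MU, SIGMA, TRIALS = 0.05, 1.2, 100_000

# Draw simulated returns from the assumed distribution and sort them.
simulated = sorted(random.gauss(MU, SIGMA) for _ in range(TRIALS))

# Empirical 5th percentile of the simulated returns, quoted as a loss.
var_95 = -simulated[int(0.05 * TRIALS)]
print(f"Simulated 95% one-period VAR: {var_95:.2f}")
```

With a normal input distribution this converges to the variance-covariance answer (1.645σ − μ); the method's real value is that the assumed distribution can be replaced by something fat-tailed or path-dependent at the cost of heavy computation.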
Most financial analysts accepted VAR with varying degrees of agreement, but many have also criticised it, pointing to certain limitations. Although the concept of VAR fits investors' perception of risk more realistically, its applicability is somewhat limited because minimum returns, confidence levels and disaster probabilities are hard to specify (Huisman et al., 1999). The Historical Simulation and Monte Carlo models of Value at Risk each have arguments in their favour, but the HS method requires a remarkable amount of calculation on past data, and the Monte Carlo method is complex and less widely used.
The most popular approach to calculating VAR is therefore the variance-covariance method. A caveat to its assumptions is that many researchers have established that return distributions are more fat-tailed than the normal distribution predicts (Hendricks, 1996). VAR is a benchmark for measuring risk, and it has served as the basis for other, more complex approaches that measure risk more accurately.