Business Decision Tools Essay
Linear programming is a process of allocating scarce resources in the best possible way. The fundamental idea is to construct a mathematical representation of the objective and of the restrictions on the resources. Linear programming is a modeling process used to help managers make logical and informed decisions (Gass, 1985). All data and input factors are assumed to be known with certainty.
When somebody speaks of “linear programming” today, they are not referring to programming in the modern computing sense. When the term was coined in the 1940s, “programming” was synonymous with scheduling or planning. Linear programming (LP) is a subcategory of mathematical programming (Nazareth, 1987).
The term “programming” is thus used in the sense of “planning”; the apparent relationship to computer programming was incidental to the choice of name. Consequently the phrase “LP program,” referring to a piece of software, is not a redundancy, although the word “code” is often used instead of “program” to avoid the possible ambiguity.
An objective function is a function of one or more variables that one is interested in either maximizing or minimizing. The function could correspond to the cost or profit of some manufacturing process. Linear programming is a vital field of optimization for several reasons. Numerous practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered significant enough to have generated much research on specialized algorithms for their solution.
A number of algorithms for other types of optimization problems work by solving LP problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality, decomposition, and the importance of convexity and its generalizations. Similarly, linear programming is broadly used in microeconomics and business management, either to maximize income or to minimize the costs of a production scheme.
Even though all linear programs can be put into the Standard Form, in practice it may not be necessary to do so. For instance, though the Standard Form requires all variables to be non-negative, most good LP software permits general bounds l <= x <= u, where l and u are vectors of known lower and upper bounds. Individual elements of these bounds vectors can even be infinity and/or minus-infinity.
This permits a variable to be without an explicit upper or lower bound, although of course the constraints in the A-matrix will generally impose implicit restrictions on the variable, or else the problem may have no finite solution. Likewise, good software permits b1 <= Ax <= b2 for arbitrary b1, b2; the user need not hide inequality constraints behind explicit “slack” variables, nor write Ax >= b1 and Ax <= b2 as two separate constraints. Furthermore, LP software can handle maximization problems just as easily as minimization.
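To make that bookkeeping concrete, here is a minimal sketch (in Python, with a made-up two-variable problem) of what such software does internally: a maximization over inequality constraints is converted to Standard Form by negating the objective and appending one slack variable per constraint row.

```python
def to_standard_form(c, A_ub, b_ub):
    """Convert  max c.x  subject to  A_ub x <= b_ub, x >= 0
    into Standard Form  min c'.x'  subject to  A_eq x' = b_eq, x' >= 0,
    by negating the objective and appending one slack variable per row."""
    m = len(A_ub)                              # number of inequality rows
    c_std = [-ci for ci in c] + [0.0] * m      # slack variables cost nothing
    A_eq = []
    for i, row in enumerate(A_ub):
        slack = [1.0 if j == i else 0.0 for j in range(m)]
        A_eq.append(list(row) + slack)         # row.x + s_i = b_i
    return c_std, A_eq, list(b_ub)

# hypothetical problem: max 3x + 5y  s.t.  x <= 4,  3x + 2y <= 18
c_std, A_eq, b_eq = to_standard_form([3, 5], [[1, 0], [3, 2]], [4, 18])
```

Production solvers perform this kind of transformation (and its reverse, for reporting the answer) automatically, which is why the user never needs to introduce slack variables by hand.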
The significance of linear programming derives in part from its many applications and in part from the existence of excellent general-purpose techniques for finding optimal solutions. These techniques take as input only an LP, and determine a solution without reference to any information about the LP’s origins or special structure. They are fast and reliable over a substantial range of problem sizes and applications. Two families of solution techniques are in broad use today.
Both visit a series of progressively improving trial solutions, until a solution is reached that satisfies the conditions for an optimum. Simplex methods, introduced about 50 years ago, visit “basic” solutions computed by fixing enough of the variables at their bounds to reduce the constraints Ax = b to a square system, which can be solved for unique values of the remaining variables. Basic solutions represent extreme points of the feasible region defined by Ax = b, x >= 0, and the simplex method can be viewed as moving from one such point to another along the edges of the boundary.
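Because an optimum, when one exists, is attained at an extreme point, the idea can be illustrated by a brute-force sketch that enumerates every vertex of a small feasible region and keeps the best one; the two-variable problem below is hypothetical, and a real simplex implementation visits these points far more selectively.

```python
from itertools import combinations

# constraints a1*x + a2*y <= b, written as (a1, a2, b);
# the last two rows encode x >= 0 and y >= 0
cons = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def objective(x, y):
    return 3 * x + 5 * y              # maximize 3x + 5y

best = None
for (a1, a2, rhs1), (c1, c2, rhs2) in combinations(cons, 2):
    det = a1 * c2 - a2 * c1           # intersect the two boundary lines
    if det == 0:
        continue                      # parallel lines: no vertex here
    x = (rhs1 * c2 - a2 * rhs2) / det
    y = (a1 * rhs2 - rhs1 * c1) / det
    # keep the intersection only if it satisfies every constraint
    if all(a * x + c * y <= b + 1e-9 for a, c, b in cons):
        if best is None or objective(x, y) > best[0]:
            best = (objective(x, y), x, y)

# best now holds the optimal value and the extreme point attaining it
```

With m constraints this checks on the order of m^2 candidate vertices, which is hopeless at scale; the simplex method instead walks from one vertex to an adjacent one, improving the objective at each step.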
Barrier or interior-point methods, by contrast, visit points within the interior of the feasible region. These methods derive from techniques for nonlinear programming that were developed and popularized in the 1960s, but their application to linear programming dates back only to Karmarkar’s innovative analysis in 1984. Linear programming has proved valuable for modeling many and diverse types of problems in planning, routing, scheduling, assignment, and design. Industries that make use of LP and its extensions include transportation, energy, telecommunications, and manufacturing of many kinds.
As an illustration of the managerial decision situations in which the linear programming technique is useful, note that linear programming can be used to solve any type of constrained optimization problem in which the relations involved can be approximated by linear equations. Linear revenue, cost, and profit relations are observed when output prices, input prices, and average variable costs are constant.
Linear programming is particularly relevant when slack or excess capacity is possible. Though feasible, linear programming techniques are normally not employed for problems in which the constraints can all be treated as equalities, since alternative and somewhat simpler solution techniques are available for such problems.
Linear programming has been used to determine optimal production schedules, investment portfolios, delivery routes, input combinations, product mixes, advertising media mixes, capital budgets, and so on.
Bayes’ theorem (also known as Bayes’ rule or Bayes’ law) is a result in probability theory that relates the conditional and marginal probability distributions of random variables. In some interpretations of probability, Bayes’ theorem tells how to update or revise beliefs a posteriori, in light of new evidence.
The probability of an event A conditional on another event B is generally different from the probability of B conditional on A. On the other hand, there is a definite relationship between the two, and Bayes’ theorem is the statement of that relationship. As a formal theorem, Bayes’ theorem is valid in all interpretations of probability.
However, frequentist and Bayesian interpretations disagree about the kinds of things to which probabilities should be assigned in applications: frequentists assign probabilities to random events according to their frequencies of occurrence, or to subsets of populations as proportions of the whole, whereas Bayesians assign probabilities to propositions that are uncertain. A consequence is that Bayesians have more frequent occasion to use Bayes’ theorem. The literature on Bayesian probability and frequentist probability discusses these debates at greater length.
Bayes’ theorem can be used in artificial systems capable of managing complex tasks in a real-world environment. It provides a model for rational judgment when only uncertain and incomplete information is available. Applying Bayes’ theorem enables the modification of initial probability estimates, so the decision process becomes more refined as new evidence is introduced.
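As a numerical sketch of that refinement (all figures below are hypothetical), the theorem P(A|B) = P(B|A) P(A) / P(B), with P(B) expanded by the law of total probability, revises a prior estimate once a piece of evidence arrives:

```python
def bayes_update(prior, likelihood, false_alarm):
    """Posterior P(A|B) given prior P(A), likelihood P(B|A),
    and false-alarm rate P(B|not A), expanding P(B) by the
    law of total probability."""
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# hypothetical example: 1% of parts are defective; an inspection flags
# 99% of defective parts but also 5% of good ones
posterior = bayes_update(prior=0.01, likelihood=0.99, false_alarm=0.05)
# even after being flagged, a part is still more likely good than defective
```

Each such posterior can serve as the prior for the next piece of evidence, which is exactly the stepwise refinement of the decision process described above.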
In mathematics and statistics, a probability distribution, more properly called a probability distribution function, assigns to every interval of the real numbers a probability, so that the probability axioms are satisfied (Papoulis, 1984). In technical terms, a probability distribution is a probability measure whose domain is the Borel algebra on the reals.
A probability distribution is a special case of the more general notion of a probability measure, which is a function that assigns probabilities satisfying the Kolmogorov axioms to the measurable sets of a measurable space. Additionally, some authors define a distribution generally as the probability measure induced by a random variable X on its range – the probability of a set B is P(X^-1(B)). However, this discussion concerns only probability measures over the real numbers.
Every random variable gives rise to a probability distribution, and this distribution contains most of the important information about the variable. If X is a random variable, the corresponding probability distribution assigns to the interval [a, b] the probability Pr[a <= X <= b], i.e. the probability that the variable X will take a value in the interval [a, b]. The probability distribution of the variable X can be uniquely described by its cumulative distribution function F(x), which is defined by F(x) = Pr[X <= x] for any x in R.
A distribution is called discrete if its cumulative distribution function consists of a sequence of finite jumps, which means that it belongs to a discrete random variable X: a variable which can only attain values from a certain finite or countable set. By one convention, a distribution is called continuous if its cumulative distribution function is continuous, which means that it belongs to a random variable X for which Pr[X = x] = 0 for all x in R. Another convention reserves the term continuous probability distribution for absolutely continuous distributions. These can be expressed by a probability density function: a non-negative Lebesgue integrable function f defined on the real numbers such that

Pr[a <= X <= b] = ∫_a^b f(x) dx

for all a and b. Of course, discrete distributions do not admit such a density; there also exist some continuous distributions, like the devil’s staircase, that do not admit a density.
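The relationship between the density and the CDF can be checked numerically; the sketch below assumes a standard exponential distribution, whose density is f(x) = e^(-x) and whose CDF is F(x) = 1 - e^(-x), and compares F(b) - F(a) against a crude trapezoidal integration of the density:

```python
import math

def density(x):
    return math.exp(-x)              # f(x) = e^(-x) for x >= 0

def cdf(x):
    return 1.0 - math.exp(-x)        # F(x) = Pr[X <= x]

a, b = 1.0, 2.0
exact = cdf(b) - cdf(a)              # Pr[a <= X <= b] via the CDF

# trapezoidal approximation of the integral of f over [a, b]
n = 100_000
h = (b - a) / n
approx = (density(a) + density(b)) / 2 + sum(density(a + i * h) for i in range(1, n))
approx *= h
```

The two quantities agree to many decimal places, which is just the displayed identity Pr[a <= X <= b] = ∫_a^b f(x) dx evaluated for one concrete distribution.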
The support of a distribution is the smallest closed set whose complement has probability zero. The probability distribution of the sum of two independent random variables is the convolution of their distributions. The probability distribution of the difference of two independent random variables is the cross-correlation of their distributions.
The binomial distribution describes the possible number of times that a particular event will occur in a sequence of observations. The binomial distribution is used when a researcher is interested in the occurrence of an event, not in its magnitude. For instance, in a clinical trial, a patient may survive or die. The researcher studies the number of survivors, and not how long the patient survives after treatment. Another instance is whether a person is ambitious or not.
Here, the binomial distribution describes the number of ambitious persons, and not how ambitious they are. In general, the binomial distribution counts the number of events in a set of trials, e.g. the number of deaths in a cohort of patients, or the number of broken eggs in a box of eggs. Other situations in which binomial distributions arise are quality control, public opinion surveys, medical research, and insurance problems.
The binomial distribution is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields success with probability p (Abdi, 2007). Such a success/failure experiment is also called a Bernoulli experiment or Bernoulli trial; in fact, when n = 1, the binomial distribution is the Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance (Devroye, 1986). Below is a graphical illustration of the binomial distribution.
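The counting formula behind this, Pr[k successes] = C(n, k) p^k (1 - p)^(n - k), can be sketched directly with Python's standard library (the coin-flip numbers are illustrative):

```python
from math import comb

def binomial_pmf(n, p, k):
    """Probability of exactly k successes in n independent
    Bernoulli trials, each with success probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# e.g. the chance of exactly 5 heads in 10 fair coin flips
p_five_heads = binomial_pmf(10, 0.5, 5)
```

Setting n = 1 recovers the Bernoulli distribution, and summing the pmf over k = 0, ..., n gives 1, as any probability distribution must.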
The need to design a test that demonstrates that a product has met a certain reliability goal is a fairly common occurrence in industry. In numerous cases, manufacturers wish to show that their product designs meet a certain reliability specification, but do not have the resources to conduct a full-scale reliability/life test. The result is not a rigorous life test in which failure information is eagerly sought, but rather a demonstration test in which failures are hoped to be avoided.
These demonstration tests are often of a limited scope and are frequently conducted by product engineers with little or no knowledge of life data analysis principles. These demonstration tests often occur toward the end of a product development cycle, when the product design has been improved to the point at which failures are, it is hoped, relatively infrequent. There is usually a great deal of pressure to maintain the development schedule and as a result, tests that merely demonstrate an acceptable minimum reliability level are required. Although these tests yield minimal meaningful information about the product’s life characteristics, they are a common requirement for many engineers in the design and manufacturing arena.
Consequently, these engineers need to be able to design and allocate resources for these tests without having a great deal of detailed information beforehand. Fortunately, the cumulative binomial distribution can be put to use to develop a rough estimate of the test design, including the test duration and the number of units to be tested, without having to develop a complete life test. Otherwise, a large number of failures must be observed before any conclusions can be drawn about the reliability of the product. The cumulative binomial distribution can also be used to analyze the results of tests in which there were few or no failures.
Basically, the test design process involves solving the cumulative binomial equation for one variable, given that the other variables are known or can be assumed. This is particularly significant for the variable R, the reliability. An estimate of the reliability value for the duration of the test is necessary when using the cumulative binomial for test design. In some cases, it may be necessary to provide values for the parameter estimates of the product’s life distribution for more detailed calculations. The next sections describe how test design information can be obtained by solving the cumulative binomial equation. Solving the cumulative binomial equation for certain variables can be difficult and in some cases almost impossible without the use of a computer.
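In the common zero-failure case the cumulative binomial equation collapses to 1 - CL = R^n, so the required number of test units can be solved for in closed form; the sketch below is illustrative rather than any particular standard's procedure, and the 90%/90% goal is a hypothetical example:

```python
import math

def units_for_zero_failure_test(reliability, confidence):
    """Smallest number of units n such that surviving n trials with zero
    failures demonstrates `reliability` at confidence level `confidence`,
    i.e. the smallest n with reliability**n <= 1 - confidence."""
    n = math.log(1.0 - confidence) / math.log(reliability)
    return math.ceil(n)

# hypothetical goal: demonstrate 90% reliability with 90% confidence
n = units_for_zero_failure_test(reliability=0.90, confidence=0.90)
# with n units on test and no failures observed, a true reliability of
# only 0.90 would produce this outcome at most 10% of the time
```

When a small number of failures is tolerated, the full cumulative binomial sum no longer inverts in closed form for R or n, which is why such calculations are normally done numerically, as noted above.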
References

Abdi, H. (2007). Binomial distribution: Binomial and sign tests. In N. J. Salkind (Ed.), Encyclopedia of Measurement and Statistics. Thousand Oaks, CA: Sage.
Devroye, L. (1986). Non-Uniform Random Variate Generation. New York: Springer-Verlag.
Gass, S. I. (1985). Linear Programming: Methods and Applications (5th ed.). International Thomson Publishing.
Nazareth, J. L. (1987). Computer Solution of Linear Programs. New York and Oxford: Oxford University Press.
Papoulis, A. (1984). Probability, Random Variables, and Stochastic Processes (2nd ed.). New York: McGraw-Hill.