A Continuous Random Variable X Assumes an Uncountably Infinite Number of Distinct Values
Discrete Random Variable
A discrete random variable is one that can assume only a finite, or countably infinite, number of distinct values.
From: The Joy of Finite Mathematics, 2016
Basic Concepts in Probability
Oliver C. Ibe, in Markov Processes for Stochastic Modeling (Second Edition), 2013
1.2.3 Continuous Random Variables
Discrete random variables have a set of possible values that is either finite or countably infinite. However, there exists another group of random variables that can assume an uncountable set of possible values. Such random variables are called continuous random variables. Thus, we define a random variable X to be a continuous random variable if there exists a nonnegative function $f_X(x)$, defined for all real $x \in (-\infty, \infty)$, having the property that for any set A of real numbers,
$$P[X \in A] = \int_{A} f_X(x)\, dx.$$
The function $f_X(x)$ is called the probability density function (PDF) of the random variable X and is defined by
$$f_X(x) = \frac{dF_X(x)}{dx}.$$
This means that
$$F_X(x) = P[X \le x] = \int_{-\infty}^{x} f_X(u)\, du.$$
URL: https://www.sciencedirect.com/science/article/pii/B9780124077959000013
Discrete Lifetime Models
N. Unnikrishnan Nair, ... N. Balakrishnan, in Reliability Modelling and Analysis in Discrete Time, 2018
Geeta Distribution
A discrete random variable X is said to have Geeta distribution with parameters θ and α if
The distribution is L-shaped and unimodal with
and
A recurrence relation is satisfied by the central moments of the form
Estimation of the parameters can be done by the method of moments or by the maximum likelihood method. The moment estimates are expressed in terms of the sample mean $\bar{x}$ and the sample variance $s^2$. Under maximum likelihood, one estimate follows directly and the other is obtained iteratively by solving the likelihood equation, in which the sample size n and the frequencies of the observed values of x appear. For details, see Consul (1990). As the Geeta model is a member of the MPSD, the reliability properties can be readily obtained from those of the MPSD discussed in Section 3.2.2.
URL: https://www.sciencedirect.com/science/article/pii/B9780128019139000038
Probability Distributions of Interest
N. Balakrishnan, ... M.S. Nikulin, in Chi-Squared Goodness of Fit Tests with Applications, 2013
8.1.3 Poisson distribution
The discrete random variable X follows the Poisson distribution with parameter λ > 0 if
$$P\{X = k\} = \frac{\lambda^k}{k!}\,e^{-\lambda}, \quad k = 0, 1, 2, \ldots,$$
and we shall denote it by $X \sim \mathcal{P}(\lambda)$. It is easy to show that
$$E[X] = \lambda \quad \text{and} \quad E[X(X-1)] = \lambda^2,$$
and so
$$\mathrm{Var}(X) = E[X(X-1)] + E[X] - (E[X])^2 = \lambda.$$
The distribution function of X is
$$F_X(x) = P\{X \le x\} = \sum_{k=0}^{\lfloor x \rfloor} \frac{\lambda^k}{k!}\,e^{-\lambda} = \frac{\Gamma(\lfloor x \rfloor + 1, \lambda)}{\lfloor x \rfloor!}, \quad x \ge 0, \tag{8.4}$$
where
$$\Gamma(a, z) = \int_{z}^{\infty} t^{a-1} e^{-t}\, dt$$
is the incomplete gamma function. Often, for large values of λ, to compute (8.4) we can use a normal approximation,
$$F_X(x) \approx \Phi\!\left(\frac{x - \lambda}{\sqrt{\lambda}}\right).$$
Let $X_1, X_2, \ldots$ be a sequence of independent and identically distributed random variables following the same Bernoulli distribution with parameter p, with
$$P\{X_i = 1\} = p = 1 - P\{X_i = 0\}, \quad 0 < p < 1.$$
Let
$$S_n = X_1 + X_2 + \cdots + X_n.$$
Then, uniformly for all real x, we have
$$P\!\left\{\frac{S_n - np}{\sqrt{np(1-p)}} \le x\right\} \longrightarrow \Phi(x) \quad \text{as } n \to \infty,$$
where Φ denotes the standard normal distribution function. From this result, it follows that for large values of n,
$$P\{S_n \le k\} \approx \Phi\!\left(\frac{k - np}{\sqrt{np(1-p)}}\right).$$
Often this approximation is used with the so-called continuity correction given by
$$P\{S_n \le k\} \approx \Phi\!\left(\frac{k + 0.5 - np}{\sqrt{np(1-p)}}\right).$$
We shall now describe the Poisson approximation to the binomial distribution. Let $Y_n$, $n = 1, 2, \ldots$, be a sequence of binomial random variables, $Y_n \sim \mathrm{Bin}(n, p_n)$, such that
$$np_n \to \lambda > 0 \quad \text{as } n \to \infty.$$
Then,
$$P\{Y_n = k\} = \binom{n}{k} p_n^k (1-p_n)^{n-k} \longrightarrow \frac{\lambda^k}{k!}\,e^{-\lambda}, \quad k = 0, 1, 2, \ldots.$$
In practice, this means that for "large" values of n and "small" values of p, we may approximate the binomial distribution by the Poisson distribution with parameter λ = np, that is,
$$\binom{n}{k} p^k (1-p)^{n-k} \approx \frac{(np)^k}{k!}\,e^{-np}.$$
It is of interest to note the quantitative bound on the error of this Poisson approximation established by Hodges and Le Cam (1960).
Hence, if the probability of success in Bernoulli trials is small, and the number of trials is large, then the number of observed successes in the trials can be regarded as a random variable following the Poisson distribution.
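As a quick numerical illustration of this limit, the sketch below (in Python, with illustrative values of n and p) compares the binomial and Poisson probabilities term by term; the two columns agree to several decimal places.

```python
from math import comb, exp, factorial

# Illustrative "large n, small p" case; lam = n*p is the Poisson parameter.
n, p = 1000, 0.003
lam = n * p

for k in range(8):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)   # exact binomial probability
    poisson = exp(-lam) * lam**k / factorial(k)    # Poisson approximation
    print(k, round(binom, 5), round(poisson, 5))
```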
URL: https://www.sciencedirect.com/science/article/pii/B9780123971944000089
Multiple Random Variables
Oliver C. Ibe, in Fundamentals of Applied Probability and Random Processes (Second Edition), 2014
Section 5.7 Covariance and Correlation Coefficient
- 5.20 Two discrete random variables X and Y have the joint PMF given by
  - a. Are X and Y independent?
  - b. What is the covariance of X and Y?
- 5.21 Two events A and B are such that their individual and joint probabilities are as specified. Let the random variable X be defined such that X = 1 if event A occurs and X = 0 if event A does not occur. Similarly, let the random variable Y be defined such that Y = 1 if event B occurs and Y = 0 if event B does not occur.
  - a. Find E[X] and the variance of X.
  - b. Find E[Y] and the variance of Y.
  - c. Find ρ_XY and determine whether or not X and Y are uncorrelated.
- 5.22 A fair die is tossed three times. Let X be the random variable that denotes the number of 1's and let Y be the random variable that denotes the number of 3's. Find the correlation coefficient of X and Y. (See the simulation sketch below.)
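For Exercise 5.22, the counts of 1's and 3's in three tosses are multinomial counts, so Cov(X, Y) = −3(1/6)(1/6) = −1/12, Var(X) = Var(Y) = 3(1/6)(5/6) = 5/12, and hence ρ_XY = −1/5. A minimal Monte Carlo sketch (in Python, with an illustrative number of simulated trials) confirms this value:

```python
import numpy as np

rng = np.random.default_rng(1)
tosses = rng.integers(1, 7, size=(200_000, 3))   # three tosses of a fair die

X = (tosses == 1).sum(axis=1)   # number of 1's
Y = (tosses == 3).sum(axis=1)   # number of 3's

print(np.corrcoef(X, Y)[0, 1])  # close to -0.2
```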
URL: https://www.sciencedirect.com/science/article/pii/B9780128008522000055
Mathematical foundations
Xin-She Yang, in Introduction to Algorithms for Data Mining and Machine Learning, 2019
2.4.1 Random variables
For a discrete random variable X with distinct values $x_i$ (such as the number of cars passing through a junction), each value may occur with a certain probability $p(x_i)$. In other words, the probability varies from value to value and is associated with the corresponding value of the random variable. Traditionally, an uppercase letter such as X is used to denote a random variable, whereas a lowercase letter such as $x$ represents its values. For example, if X denotes a coin-flipping event, then $x = 0$ (tail) or $x = 1$ (head). A probability function $p(x_i)$ is a function that assigns probabilities to all the discrete values $x_i$ of the random variable X.
As an event must occur somewhere inside the sample space, all the probabilities must sum to one, which leads to
$$\sum_{i} p(x_i) = 1. \tag{2.33}$$
For example, the outcomes of tossing a fair coin form a sample space. The outcome of a head (H) is an event with probability P(H) = 1/2, and the outcome of a tail (T) is also an event with probability P(T) = 1/2. The sum of both probabilities should be one, that is,
$$P(H) + P(T) = \frac{1}{2} + \frac{1}{2} = 1. \tag{2.34}$$
The cumulative probability function of X is defined by
$$F(x) = P(X \le x) = \sum_{x_i \le x} p(x_i). \tag{2.35}$$
Two main measures for a random variable X with a given probability distribution p(x) are its mean and variance. The mean μ or expectation E[X] is defined by
$$\mu = E[X] = \int x\, p(x)\, dx \tag{2.36}$$
for a continuous distribution, where the integration is taken over the range of X. If the random variable is discrete, then the integration becomes the weighted sum
$$E[X] = \sum_i x_i\, p(x_i). \tag{2.37}$$
The variance $\sigma^2$ is the expectation of the squared deviation from the mean, that is, $E[(X-\mu)^2]$. We have
$$\sigma^2 = \mathrm{var}[X] = E[(X-\mu)^2] = \int (x-\mu)^2\, p(x)\, dx = E[X^2] - \mu^2. \tag{2.38}$$
The square root of the variance is called the standard deviation, which is simply σ.
The above definition of the mean is essentially the first moment if we define the kth moment of a random variable X (with a probability density distribution p(x)) by
$$E[X^k] = \int x^k\, p(x)\, dx. \tag{2.39}$$
Similarly, we can define the kth central moment by
$$\mu_k = E[(X-\mu)^k] = \int (x-\mu)^k\, p(x)\, dx, \tag{2.40}$$
where μ is the mean (the first moment). Thus, the zeroth central moment is the sum of all probabilities when k = 0, which gives $\mu_0 = 1$. The first central moment is $\mu_1 = E[X-\mu] = 0$. The second central moment is the variance $\sigma^2$, that is, $\mu_2 = \sigma^2$.
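As a small illustration of these definitions, the sketch below (in Python, with an illustrative set of values and probabilities) computes the mean, the variance, and a few central moments of a discrete random variable:

```python
import numpy as np

x = np.array([0, 1, 2, 3])
p = np.array([0.1, 0.3, 0.4, 0.2])        # probabilities summing to one, Eq. (2.33)

mean = np.sum(x * p)                       # weighted sum of Eq. (2.37)

def central_moment(k):
    return np.sum((x - mean) ** k * p)     # discrete analogue of Eq. (2.40)

var = central_moment(2)                    # second central moment = variance
std = np.sqrt(var)

print(mean, var, std)
print(central_moment(0), central_moment(1))   # 1.0 and 0.0, as noted above
```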
URL: https://www.sciencedirect.com/science/article/pii/B9780128172162000090
Sampling distributions
Kandethody M. Ramachandran, Chris P. Tsokos, in Mathematical Statistics with Applications in R (Third Edition), 2021
4.4 The normal approximation to the binomial distribution
We know that a binomial random variable Y, with parameters n and p = P(success), can be viewed as the number of successes in n trials and can be written as
$$Y = \sum_{i=1}^{n} X_i,$$
where
$$X_i = \begin{cases} 1, & \text{if the } i\text{th trial is a success},\\ 0, & \text{otherwise}. \end{cases}$$
The fraction of successes in n trials is
$$\frac{Y}{n} = \frac{1}{n}\sum_{i=1}^{n} X_i.$$
Hence, Y/n is a sample mean. Since E(X_i) = p and Var(X_i) = p(1 − p), we have
$$E\!\left(\frac{Y}{n}\right) = p \quad \text{and} \quad \mathrm{Var}\!\left(\frac{Y}{n}\right) = \frac{p(1-p)}{n}.$$
Because $Y = \sum_{i=1}^{n} X_i$, by the central limit theorem Y has an approximate normal distribution with mean μ = np and variance σ² = np(1 − p). Because the calculation of binomial probabilities is cumbersome for large sample sizes n, the normal approximation to the binomial distribution is widely used. A useful rule of thumb for using the normal approximation is that the sample size is large enough if np ≥ 5 and n(1 − p) ≥ 5. Otherwise, the binomial distribution may be so asymmetric that the normal distribution does not provide a good approximation. Other rules, such as np ≥ 10 and n(1 − p) ≥ 10, or np(1 − p) ≥ 10, are also used in the literature. Because all of these rules are only approximations, for consistency's sake we will use np ≥ 5 and n(1 − p) ≥ 5 to test for largeness of the sample size in the normal approximation to the binomial distribution. If the need arises, we could use the more stringent condition np(1 − p) ≥ 10.
Recall that a binomial random variable takes only integer values, so its probabilities are concentrated at the integers, as shown in Fig. 4.7. A normal random variable, however, has zero probability at each individual integer; it has nonzero probability only over intervals. Because we are approximating a discrete distribution with a continuous one, we need to introduce a correction for continuity, which is explained next.
Correction for continuity for the normal approximation to the binomial distribution
- (a) To approximate P(X ≤ a) or P(X > a), the correction for continuity is (a + 0.5), that is,
  $$P(X \le a) \approx P\!\left(Z \le \frac{a + 0.5 - np}{\sqrt{np(1-p)}}\right)$$
  and
  $$P(X > a) \approx P\!\left(Z > \frac{a + 0.5 - np}{\sqrt{np(1-p)}}\right).$$
- (b) To approximate P(X ≥ a) or P(X < a), the correction for continuity is (a − 0.5), that is,
  $$P(X \ge a) \approx P\!\left(Z \ge \frac{a - 0.5 - np}{\sqrt{np(1-p)}}\right)$$
  and
  $$P(X < a) \approx P\!\left(Z < \frac{a - 0.5 - np}{\sqrt{np(1-p)}}\right).$$
- (c) To approximate P(a ≤ X ≤ b), treat the ends of the interval separately, calculating two distinct z-values according to steps (a) and (b), that is,
  $$P(a \le X \le b) \approx P\!\left(\frac{a - 0.5 - np}{\sqrt{np(1-p)}} \le Z \le \frac{b + 0.5 - np}{\sqrt{np(1-p)}}\right).$$
- (d) Use the normal table to obtain the approximate probability of the binomial event.
The shaded area in Fig. 4.8 represents the continuity correction for P(X = i).
Example 4.4.2
A study of parallel interchange ramps revealed that many drivers do not use the entire length of parallel lanes for acceleration, but seek, as soon as possible, a gap in the major stream of traffic to merge. At one site on Interstate Highway 75, 46% of drivers used less than one-third of the lane length available before merging. Suppose we monitor the merging pattern of a random sample of 250 drivers at this site.
- (a) What is the probability that fewer than 120 of the drivers will use less than one-third of the acceleration lane length before merging?
- (b) What is the probability that more than 225 of the drivers will use less than one-third of the acceleration lane length before merging?
Solution
First we check for adequacy of the sample size: np = 250(0.46) = 115 and n(1 − p) = 250(0.54) = 135. Both are greater than 5. Hence, we can use the normal approximation. Let X be the number of drivers using less than one-third of the lane length available before merging. Then X can be considered to be a binomial random variable. Also, μ = np = 115 and σ² = np(1 − p) = 250(0.46)(0.54) = 62.1. Thus, σ = √62.1 ≈ 7.88.
- (a) P(X < 120) ≈ P(Z < (120 − 0.5 − 115)/7.88) = P(Z < 0.57) ≈ 0.7157; that is, we are approximately 71.57% certain that fewer than 120 drivers will use less than one-third of the acceleration lane length before merging.
- (b) P(X > 225) ≈ P(Z > (225 + 0.5 − 115)/7.88) = P(Z > 14.02) ≈ 0; that is, there is almost no chance that more than 225 drivers will use less than one-third of the acceleration lane length before merging. (A numerical check is sketched below.)
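The approximation can be checked against exact binomial probabilities. The sketch below (in Python, using SciPy) reproduces parts (a) and (b) of the example:

```python
import math
from scipy.stats import binom, norm

n, p = 250, 0.46
mu = n * p                              # 115
sigma = math.sqrt(n * p * (1 - p))      # about 7.88

# (a) P(X < 120) = P(X <= 119)
exact_a = binom.cdf(119, n, p)
approx_a = norm.cdf((119 + 0.5 - mu) / sigma)

# (b) P(X > 225)
exact_b = 1 - binom.cdf(225, n, p)
approx_b = 1 - norm.cdf((225 + 0.5 - mu) / sigma)

print(exact_a, approx_a)   # both close to 0.72
print(exact_b, approx_b)   # both essentially zero
```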
URL: https://www.sciencedirect.com/science/article/pii/B978012817815700004X
Pairs of Random Variables
Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012
Section 5.4 Conditional Distribution, Density and Mass Functions
- 5.17 For the discrete random variables whose joint PMF is described by the table in Exercise 5.14, find the following conditional PMFs:
  - (a) P_M(m | N = 2);
  - (b) P_M(m | N ≥ 2);
  - (c) P_N(n | M ≠ 2).
- 5.18 Consider again the random variables in Exercise 5.11 that are uniformly distributed over an ellipse.
  - (a) Find the conditional PDFs, f_X|Y(x|y) and f_Y|X(y|x).
  - (b) Find f_X|{Y>1}(x).
  - (c) Find f_Y|{|X|<1}(y).
- 5.19 Recall the random variables of Exercise 5.12 that are uniformly distributed over the region |X| + |Y| ≤ 1.
  - (a) Find the conditional PDFs, f_X|Y(x|y) and f_Y|X(y|x).
  - (b) Find the conditional CDFs, F_X|Y(x|y) and F_Y|X(y|x).
  - (c) Find f_X|{Y>1/2}(x) and F_X|{Y>1/2}(x).
- 5.20 Suppose a pair of random variables (X, Y) is uniformly distributed over a rectangular region, A: x_1 < X < x_2, y_1 < Y < y_2. Find the conditional PDF of (X, Y) given the conditioning event (X, Y) ∈ B, where the region B is an arbitrary region completely contained within the rectangle A, as shown in the accompanying figure. (A brief simulation sketch follows this list.)
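For Exercise 5.20, conditioning the uniform joint PDF on the event (X, Y) ∈ B simply rescales it: f_{XY|B}(x, y) = f_{XY}(x, y)/Pr[B] = 1/Area(B) for (x, y) ∈ B, and 0 otherwise. A minimal Monte Carlo sketch (in Python, with an illustrative rectangle A and a disk-shaped region B) checks the rescaling factor Pr[B] = Area(B)/Area(A):

```python
import numpy as np

rng = np.random.default_rng(2)
x1, x2, y1, y2 = 0.0, 4.0, 0.0, 2.0                      # rectangle A
xy = rng.uniform([x1, y1], [x2, y2], size=(500_000, 2))  # uniform points in A

# B: a disk of radius 0.5 centred at (1, 1), fully contained in A.
in_B = (xy[:, 0] - 1.0) ** 2 + (xy[:, 1] - 1.0) ** 2 < 0.5 ** 2

area_A = (x2 - x1) * (y2 - y1)
area_B = np.pi * 0.5 ** 2

print(in_B.mean(), area_B / area_A)   # Pr[B] estimate vs Area(B)/Area(A)
```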
URL: https://www.sciencedirect.com/science/article/pii/B9780123869814500084
Introduction to Probability Theory
Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012
2.8 Discrete Random Variables
Suppose we conduct an experiment, E, which has some sample space, S. Furthermore, let ξ be some outcome defined on the sample space, S. It is useful to define functions of the outcome ξ, X = f(ξ). That is, the function f has as its domain all possible outcomes associated with the experiment, E. The range of the function f will depend upon how it maps outcomes to numerical values but in general will be the set of real numbers or some part of the set of real numbers. Formally, we have the following definition.
Definition 2.9: A random variable is a real valued function of the elements of a sample space, S. Given an experiment, E, with sample space, S, the random variable X maps each possible outcome, ξ ∈ S, to a real number X(ξ) as specified by some rule. If the mapping X(ξ) is such that the random variable X takes on a finite or countably infinite number of values, then we refer to X as a discrete random variable; whereas, if the range of X(ξ) is an uncountably infinite number of points, we refer to X as a continuous random variable.
Since X = f(ξ) is a random variable whose numerical value depends on the outcome of an experiment, we cannot describe the random variable by stating its value; rather, we must give it a probabilistic description by stating the probabilities that the variable X takes on a specific value or values (e.g., Pr(X=3) or Pr(X > 8)). For now, we will focus on random variables which take on discrete values and will describe these random variables in terms of probabilities of the form Pr(X=x). In the next chapter when we study continuous random variables, we will find this description to be insufficient and will introduce other probabilistic descriptions as well.
Definition 2.10: The probability mass function (PMF), PX (x), of a random variable, X, is a function that assigns a probability to each possible value of the random variable, X. The probability that the random variable X takes on the specific value x is the value of the probability mass function for x. That is, PX (x) = Pr(X=x). We use the convention that upper case variables represent random variables while lower case variables represent fixed values that the random variable can assume.
Example 2.23
A discrete random variable may be defined for the random experiment of flipping a coin. The sample space of outcomes is S = {H, T}. We could define the random variable X to be X(H) = 0 and X(T) = 1. That is, the sample space {H, T} is mapped to the set {0, 1} by the random variable X. Assuming a fair coin, the resulting probability mass function is $P_X(0) = 1/2$ and $P_X(1) = 1/2$. Note that the mapping is not unique and we could have just as easily mapped the sample space {H, T} to any other pair of real numbers (e.g., {1, 2}).
Example 2.24
Suppose we repeat the experiment of flipping a fair coin n times and observe the sequence of heads and tails. A random variable, Y, could be defined to be the number of times tails occurs in n trials. It turns out that the probability mass function for this random variable is
$$P_Y(k) = \binom{n}{k}\left(\frac{1}{2}\right)^{n}, \quad k = 0, 1, \ldots, n.$$
The details of how this PMF is obtained will be deferred until later in this section.
Example 2.25
Again, let the experiment be the flipping of a coin, and this time we will continue repeating the event until the first time a heads occurs. The random variable Z will represent the number of times until the first occurrence of a heads. In this case, the random variable Z can take on any positive integer value, 1 ≤ Z < ∞. The probability mass function of the random variable Z can be worked out as follows: the event {Z = n} occurs when the first n − 1 flips are tails and the nth flip is heads, so that
$$P_Z(n) = \left(\frac{1}{2}\right)^{n-1}\left(\frac{1}{2}\right) = \left(\frac{1}{2}\right)^{n}.$$
Hence,
$$P_Z(n) = 2^{-n}, \quad n = 1, 2, 3, \ldots.$$
Example 2.26
In this example, we will estimate the PMF in Example 2.24 via MATLAB simulation using the relative frequency approach. Suppose the experiment consists of tossing the coin n = 10 times and counting the number of tails. We then repeat this experiment a large number of times and count the relative frequency of each number of tails to estimate the PMF. The following MATLAB code can be used to accomplish this. Results of running this code are shown in Figure 2.3.
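The MATLAB listing itself is not reproduced in this excerpt; a minimal sketch of the same relative-frequency estimate in Python (with m denoting the number of repetitions of the 10-toss experiment) might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

n = 10        # tosses per experiment
m = 10_000    # number of times the experiment is repeated

rng = np.random.default_rng(0)
tosses = rng.integers(0, 2, size=(m, n))     # 1 = tails, 0 = heads (fair coin)
num_tails = tosses.sum(axis=1)               # number of tails in each experiment

# Relative frequency of each count 0..n estimates the PMF of Example 2.24.
pmf_estimate = np.bincount(num_tails, minlength=n + 1) / m

plt.bar(range(n + 1), pmf_estimate)
plt.xlabel('Number of tails in 10 tosses')
plt.ylabel('Relative frequency')
plt.show()
```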
Try running this code using a larger value for m. You should see more accurate relative frequency estimates as you increase m.
From the preceding examples, it should be clear that the probability mass function associated with a random variable, X, must obey certain properties. First, since $P_X(x)$ is a probability, it must be nonnegative and no greater than 1. Second, if we sum $P_X(x)$ over all x, then this is the same as the sum of the probabilities of all outcomes in the sample space, which must be equal to 1. Stated mathematically, we may conclude that
$$0 \le P_X(x) \le 1 \quad \text{and} \quad \sum_{x} P_X(x) = 1.$$
When developing the probability mass function for a random variable, it is useful to check that the PMF satisfies these properties.
In the paragraphs that follow, we list some commonly used discrete random variables, along with their probability mass functions, and some real-world applications in which each might typically be used.
A. Bernoulli Random Variable
This is the simplest possible random variable and is used to represent experiments which have two possible outcomes. These experiments are called Bernoulli trials and the resulting random variable is called a Bernoulli random variable. It is most common to associate the values {0, 1} with the two outcomes of the experiment. If X is a Bernoulli random variable, its probability mass function is of the form
$$P_X(0) = 1 - p, \quad P_X(1) = p. \tag{2.34}$$
The coin tossing experiment would produce a Bernoulli random variable. In that case, we may map the outcome H to the value X = 1 and T to X = 0. Also, we would use the value p = 1/2 assuming that the coin is fair. Examples of engineering applications might include radar systems where the random variable could indicate the presence (X = 1) or absence (X = 0) of a target, or a digital communication system where X = 1 might indicate a bit was transmitted in error while X = 0 would indicate that the bit was received correctly. In these examples, we would probably expect that the value of p would be much smaller than 1/2.
B. Binomial Random Variable
Consider repeating a Bernoulli trial n times, where the outcome of each trial is independent of all others. The Bernoulli trial has a sample space of S = {0, 1} and we say that the repeated experiment has a sample space of $S^n = \{0, 1\}^n$, which is referred to as a Cartesian space. That is, outcomes of the repeated trials are represented as n-element vectors whose elements are taken from S. Consider, for example, the outcome
$$(\underbrace{1, 1, \ldots, 1}_{k\ \text{ones}},\ \underbrace{0, 0, \ldots, 0}_{n-k\ \text{zeros}}).$$
The probability of this outcome occurring is
$$\Pr(1, 1, \ldots, 1, 0, 0, \ldots, 0) = p \cdot p \cdots p \cdot (1-p)(1-p)\cdots(1-p) = p^k (1-p)^{n-k}. \tag{2.36}$$
In fact, the order of the 1's and 0's in the sequence is irrelevant. Any outcome with exactly k 1's and n − k 0's would have the same probability. Now let the random variable X represent the number of times the outcome 1 occurred in the sequence of n trials. This is known as a binomial random variable and takes on integer values from 0 to n. To find the probability mass function of the binomial random variable, let Ak be the set of all outcomes which have exactly k 1's and n − k 0's. Note that all outcomes in this event occur with the same probability. Furthermore, all outcomes in this event are mutually exclusive. Then,
$$P_X(k) = \Pr(A_k) = (\text{number of outcomes in } A_k) \times (\text{probability of each outcome in } A_k)$$
$$= (\text{number of outcomes in } A_k)\; p^k (1-p)^{n-k}. \tag{2.37}$$
The number of outcomes in the event $A_k$ is just the number of combinations of n objects taken k at a time. Referring to Theorem 2.7, this is the binomial coefficient, so that
$$P_X(k) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, 1, \ldots, n. \tag{2.38}$$
As a check, we verify that this probability mass function is properly normalized:
$$\sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k} = \bigl(p + (1-p)\bigr)^n = 1^n = 1. \tag{2.39}$$
In the above calculation, we have used the binomial expansion
$$(a + b)^n = \sum_{k=0}^{n} \binom{n}{k} a^k b^{n-k}. \tag{2.40}$$
Binomial random variables occur, in practice, any time Bernoulli trials are repeated. For example, in a digital communication system, a packet of n bits may be transmitted and we might be interested in the number of bits in the packet that are received in error. Or, perhaps a bank manager might be interested in the number of tellers that are serving customers at a given point in time. Similarly, a medical technician might want to know how many cells from a blood sample are white and how many are red. In Example 2.24, the coin tossing experiment was repeated n times and the random variable Y represented the number of times tails occurred in the sequence of n tosses. This is a repetition of a Bernoulli trial, and hence the random variable Y should be a binomial random variable with p = 1/2 (assuming the coin is fair).
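As a small check of the PMF just derived, the sketch below (in Python, with illustrative values of n and p) evaluates P_X(k) for all k and confirms the normalization of Equation (2.39):

```python
from math import comb

n, p = 10, 0.3
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

print(sum(pmf))   # 1.0 (up to floating-point rounding), by the binomial expansion
print(pmf[3])     # P_X(3), for example
```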
C. Poisson Random Variable
Consider a binomial random variable, X, where the number of repeated trials, n, is very large. In that case, evaluating the binomial coefficients can pose numerical problems. If the probability of success in each individual trial, p, is very small, then the binomial random variable can be well approximated by a Poisson random variable. That is, the Poisson random variable is a limiting case of the binomial random variable. Formally, let n approach infinity and p approach zero in such a way that the product $np = \alpha$ remains constant. Then, the binomial probability mass function converges to the form
$$P_X(k) = \frac{\alpha^k}{k!}\, e^{-\alpha}, \quad k = 0, 1, 2, \ldots, \tag{2.41}$$
which is the probability mass function of a Poisson random variable. We see that the Poisson random variable is properly normalized by noting that
$$\sum_{k=0}^{\infty} P_X(k) = \sum_{k=0}^{\infty} \frac{\alpha^k}{k!}\, e^{-\alpha} = e^{-\alpha}\, e^{\alpha} = 1 \tag{2.42}$$
(see Equation E.14 in Appendix E). The Poisson random variable is extremely important as it describes the behavior of many physical phenomena. It is commonly used in queuing theory and in communication networks. The number of customers arriving at a cashier in a store during some time interval may be well modeled as a Poisson random variable as may the number of data packets arriving at a node in a computer network. We will see increasingly in later chapters that the Poisson random variable plays a fundamental role in our development of a probabilistic description of noise.
D. Geometric Random Variable
Consider repeating a Bernoulli trial until the first occurrence of the outcome ξ0. If X represents the number of times the outcome ξ1 occurs before the first occurrence of ξ0, then X is a geometric random variable whose probability mass function is
$$P_X(k) = (1-p)\, p^{k}, \quad k = 0, 1, 2, \ldots, \tag{2.43}$$
where p = Pr(ξ1). We might also formulate the geometric random variable in a slightly different way. Suppose X counted the number of trials that were performed until the first occurrence of ξ0. Then, the probability mass function would take on the form
$$P_X(k) = (1-p)\, p^{k-1}, \quad k = 1, 2, 3, \ldots. \tag{2.44}$$
The geometric random variable can also be generalized to the case where the outcome ξ0 must occur exactly m times. That is, the generalized geometric random variable counts the number of Bernoulli trials that must be repeated until the mth occurrence of the outcome ξ0. We can derive the form of the probability mass function for the generalized geometric random variable from what we know about binomial random variables. For the mth occurrence of ξ0 to occur on the kth trial, the first k − 1 trials must have had m − 1 occurrences of ξ0 and k − m occurrences of ξ1. Then
$$P_X(k) = \Pr\bigl(\{(m-1)\ \text{occurrences of}\ \xi_0\ \text{in the first}\ k-1\ \text{trials}\} \cap \{\xi_0\ \text{occurs on the}\ k\text{th trial}\}\bigr)$$
$$= \binom{k-1}{m-1}(1-p)^{m-1} p^{k-m}\,(1-p) = \binom{k-1}{m-1}(1-p)^{m} p^{k-m}, \quad k = m, m+1, m+2, \ldots \tag{2.45}$$
This generalized geometric random variable sometimes goes by the name of a Pascal random variable or the negative binomial random variable.
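A brief simulation sketch (in Python, with illustrative values of p and m, and with p taken as the probability of ξ1 on each trial) compares empirical frequencies of the trial count with the Pascal PMF in Equation (2.45):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)
p, m = 0.5, 3                     # Pr(xi_1) = p, Pr(xi_0) = 1 - p; wait for m-th xi_0
num_samples = 100_000

def trials_until_mth_zero():
    count, zeros = 0, 0
    while zeros < m:
        count += 1
        if rng.random() < 1 - p:  # outcome xi_0 occurs on this trial
            zeros += 1
    return count

samples = np.array([trials_until_mth_zero() for _ in range(num_samples)])

for k in range(m, m + 6):
    pmf = comb(k - 1, m - 1) * (1 - p) ** m * p ** (k - m)
    print(k, round((samples == k).mean(), 4), round(pmf, 4))
```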
Of course, one can define many other random variables and develop the associated probability mass functions. We have chosen to introduce some of the more important discrete random variables here. In the next chapter, we will introduce some continuous random variables and the appropriate probabilistic descriptions of these random variables. However, to close out this chapter, we provide a section showing how some of the material covered herein can be used in at least one engineering application.
URL: https://www.sciencedirect.com/science/article/pii/B9780123869814500059
Introduction
Mark A. Pinsky, Samuel Karlin, in An Introduction to Stochastic Modeling (Fourth Edition), 2011
1.2.3 Moments and Expected Values
If X is a discrete random variable, then its mth moment is given by
$$E[X^m] = \sum_{i} x_i^m\, p_X(x_i) \tag{1.6}$$
[where the $x_i$ are specified in (1.1)], provided that the infinite sum converges absolutely. Where the infinite sum diverges, the moment is said not to exist. If X is a continuous random variable with probability density function $f_X(x)$, then its mth moment is given by
$$E[X^m] = \int_{-\infty}^{\infty} x^m f_X(x)\, dx, \tag{1.7}$$
provided that this integral converges absolutely.
The first moment, corresponding to m = 1, is commonly called the mean or expected value of X and written $m_X$ or $\mu_X$. The mth central moment of X is defined as the mth moment of the random variable $X - \mu_X$, provided that $\mu_X$ exists. The first central moment is zero. The second central moment is called the variance of X and written $\sigma_X^2$ or Var[X]. We have the equivalent formulas $\mathrm{Var}[X] = E[(X-\mu)^2] = E[X^2] - \mu^2$.
The median of a random variable X is any value v with the property that
$$\Pr\{X \le v\} \ge \tfrac{1}{2} \quad \text{and} \quad \Pr\{X \ge v\} \ge \tfrac{1}{2}.$$
If X is a random variable and g is a function, then Y = g(X) is also a random variable. If X is a discrete random variable with possible values $x_1, x_2, \ldots$, then the expectation of g(X) is given by
$$E[g(X)] = \sum_{i} g(x_i)\, p_X(x_i), \tag{1.8}$$
provided that the sum converges absolutely. If X is continuous and has the probability density function $f_X$, then the expected value of g(X) is evaluated from
$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\, dx. \tag{1.9}$$
The general formula, covering both the discrete and continuous cases, is
$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\, dF_X(x), \tag{1.10}$$
where FX is the distribution function of the random variable X. Technically speaking, the integral in (1.10) is a Lebesgue–Stieltjes integral. We do not require knowledge of such integrals in this text, but interpret (1.10) to signify (1.8) when X is a discrete random variable, and to represent (1.9) when X possesses a probability density fX.
Let F_Y(y) = Pr{Y ≤ y} denote the distribution function for Y = g(X). When X is a discrete random variable, then
$$E[Y] = \sum_{j} y_j \Pr\{Y = y_j\} = \sum_{i} g(x_i)\, p_X(x_i)$$
if $y_i = g(x_i)$, and provided that the second sum converges absolutely. In general,
$$E[Y] = \int_{-\infty}^{\infty} y\, dF_Y(y) = \int_{-\infty}^{\infty} g(x)\, dF_X(x). \tag{1.11}$$
If X is a discrete random variable, then so is Y = g (X). It may be, however, that X is a continuous random variable, while Y is discrete (the reader should provide an example). Even so, one may compute E [Y] from either form in (1.11) with the same result.
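As a small illustration of the equality of the two forms in (1.11) for a discrete random variable, the sketch below (in Python, with an illustrative PMF and g(x) = x²) computes E[g(X)] directly over the values of X and again from the induced distribution of Y = g(X):

```python
import numpy as np

x = np.array([-1, 0, 1, 2])
p = np.array([0.2, 0.3, 0.4, 0.1])   # PMF of X
g = lambda t: t ** 2

# E[g(X)] summed over the values of X, as in (1.8)
e1 = np.sum(g(x) * p)

# E[Y] computed from the distribution of Y = g(X)
y_values, inverse = np.unique(g(x), return_inverse=True)
p_y = np.bincount(inverse, weights=p)       # collect probability onto each y value
e2 = np.sum(y_values * p_y)

print(e1, e2)   # the two values agree
```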
URL: https://www.sciencedirect.com/science/article/pii/B9780123814166000010
RANDOM VARIABLES AND EXPECTATION
Sheldon M. Ross, in Introduction to Probability and Statistics for Engineers and Scientists (Fourth Edition), 2009
EXAMPLE 4.2a
Consider a random variable X that is equal to 1, 2, or 3. If we know the values of p(1) and p(2),
then it follows (since p(1) + p(2) + p(3) = 1) that p(3) = 1 − p(1) − p(2).
A graph of p(x) is presented in Figure 4.1.
The cumulative distribution function F can be expressed in terms of p(x) by
$$F(a) = \sum_{\text{all } x \le a} p(x).$$
If X is a discrete random variable whose set of possible values are $x_1, x_2, x_3, \ldots$, where $x_1 < x_2 < x_3 < \cdots$, then its distribution function F is a step function. That is, the value of F is constant in the intervals $[x_{i-1}, x_i)$ and then takes a step (or jump) of size $p(x_i)$ at $x_i$.
For instance, suppose X has a probability mass function given (as in EXAMPLE 4.2a) by the values p(1), p(2), and p(3). Then the cumulative distribution function F of X is given by
$$F(a) = \begin{cases} 0, & a < 1,\\ p(1), & 1 \le a < 2,\\ p(1) + p(2), & 2 \le a < 3,\\ 1, & 3 \le a. \end{cases}$$
This is graphically presented in Figure 4.2.
Whereas the set of possible values of a discrete random variable is a sequence, we often must consider random variables whose set of possible values is an interval. Let X be such a random variable. We say that X is a continuous random variable if there exists a nonnegative function f(x), defined for all real $x \in (-\infty, \infty)$, having the property that for any set B of real numbers
$$P\{X \in B\} = \int_{B} f(x)\, dx. \tag{4.2.1}$$
The function f (x) is called the probability density function of the random variable X .
In words, Equation 4.2.1 states that the probability that X will be in B may be obtained by integrating the probability density function over the set B. Since X must assume some value, f(x) must satisfy
$$1 = P\{X \in (-\infty, \infty)\} = \int_{-\infty}^{\infty} f(x)\, dx.$$
All probability statements about X can be answered in terms of f(x). For instance, letting B = [a, b], we obtain from Equation 4.2.1 that
$$P\{a \le X \le b\} = \int_{a}^{b} f(x)\, dx. \tag{4.2.2}$$
If we let a = b in the above, then
$$P\{X = a\} = \int_{a}^{a} f(x)\, dx = 0.$$
In words, this equation states that the probability that a continuous random variable will assume any particular value is zero. (See Figure 4.3.)
The relationship between the cumulative distribution F(·) and the probability density f(·) is expressed by
$$F(a) = P\{X \in (-\infty, a]\} = \int_{-\infty}^{a} f(x)\, dx.$$
Differentiating both sides yields
$$\frac{d}{da} F(a) = f(a).$$
That is, the density is the derivative of the cumulative distribution function. A somewhat more intuitive interpretation of the density function may be obtained from Equation 4.2.2 as follows:
$$P\!\left\{a - \frac{\varepsilon}{2} \le X \le a + \frac{\varepsilon}{2}\right\} = \int_{a - \varepsilon/2}^{a + \varepsilon/2} f(x)\, dx \approx \varepsilon f(a)$$
when ε is small. In other words, the probability that X will be contained in an interval of length ε around the point a is approximately ε f(a). From this, we see that f(a) is a measure of how likely it is that the random variable will be near a.
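A quick numerical check of this interpretation (in Python, using a standard normal density as an illustrative choice of f):

```python
from scipy.stats import norm

a, eps = 1.0, 0.01

exact = norm.cdf(a + eps / 2) - norm.cdf(a - eps / 2)   # P(a - eps/2 <= X <= a + eps/2)
approx = eps * norm.pdf(a)                              # eps * f(a)

print(exact, approx)   # agree to several decimal places
```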
URL: https://www.sciencedirect.com/science/article/pii/B9780123704832000096
Source: https://www.sciencedirect.com/topics/mathematics/discrete-random-variable