
Bayes Error Rate Definition


Figure 4.13: Two bivariate normal distributions whose priors are exactly the same. As a second simplification, assume that the variance of colours is the same as the variance of weights. The covariance matrix would then have identical diagonal elements, while the off-diagonal element would be a strictly positive number representing the covariance of x and y (see Figure 4.11). However, the clusters of each class are of equal size and shape and are still centered about the mean for that class.

Case 2: Another simple case arises when the covariance matrices for all of the classes are identical but otherwise arbitrary. The so-called symmetrical, or zero-one, loss function is given as l(ai|wj) = 0 if i = j and 1 if i ≠ j. Therefore, the decision boundary is exactly at the midpoint between the two means. More generally, we assume that there is some prior probability P(w1) that the next fish is sea bass, and some prior probability P(w2) that it is salmon.
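A quick sketch of that midpoint boundary, with assumed toy values (not taken from the text): for two univariate Gaussian classes with equal priors and equal variance, the zero-one-loss boundary sits halfway between the means, and the error rate is the tail mass of either class density beyond that point.

    (* Sketch with assumed values: two 1D Gaussian classes, equal priors, equal variance. *)
    mu1 = 0; mu2 = 2; sigma = 1;
    x0 = (mu1 + mu2)/2;                                    (* midpoint decision boundary *)
    errorRate = CDF[NormalDistribution[mu2, sigma], x0]    (* tail mass beyond x0, ~0.159;
                                                              by symmetry this is the total
                                                              error rate under equal priors *)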


Geometrically, this corresponds to the situation in which the samples fall in equal-size hyperspherical clusters, the cluster for the ith class being centered about the mean vector mi (see Figure 4.12). When this happens, the optimum decision rule can be stated very simply: the decision is based entirely on the distance from the feature vector x to the different mean vectors. For the problem above, the error corresponds to the volumes of the following regions; you can integrate the two pieces separately using a numerical integration package.
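A minimal sketch of that minimum-distance rule, with made-up mean vectors (not from the text): with identical spherical covariances and equal priors, assign x to the class whose mean is nearest in Euclidean distance.

    (* Nearest-mean (minimum-distance) rule; the means below are assumed toy values. *)
    means = {{0, 0}, {3, 1}, {1, 4}};
    classify[x_] := First@Ordering[Norm[x - #] & /@ means, 1]
    classify[{2.5, 0.5}]   (* -> 2, i.e. the class with mean {3, 1} *)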

Figure 4.24: Example of a straight decision surface. The contour lines are stretched out in the x direction to reflect the fact that the distance spreads out at a lower rate in the x direction than it does in the y direction. The analog to the Cauchy-Schwarz inequality comes from recognizing that if w is any d-dimensional vector, then the variance of wTx can never be negative. If the prior probabilities are equal, then x0 is halfway between the means.

For notational simplicity, let lij = l(ai|wj) be the loss incurred for deciding wi when the true state of nature is wj. Then this boundary can be written as wT(x − x0) = 0. Note, though, that the decision boundary is orthogonal to this vector w, so the direction of the boundary itself is given by any vector orthogonal to w. Now consider what happens to x0 as the prior probabilities change.

As a concrete example, consider two Gaussians with the following parameters: $$\mu_1=\left(\begin{matrix} -1\\ -1 \end{matrix}\right),\quad \mu_2=\left(\begin{matrix} 1\\ 1 \end{matrix}\right)$$ $$\Sigma_1=\left(\begin{matrix} 2 & 1/2\\ 1/2 & 2 \end{matrix}\right),\quad \Sigma_2=\left(\begin{matrix} 1 & 0\\ 0 & 1 \end{matrix}\right)$$ The Bayes optimal classifier boundary will be the set of points where the two posteriors (the class-conditional densities weighted by the priors) are equal. If the prior probabilities P(wi) are the same for all c classes, then the ln P(wi) term becomes another unimportant additive constant that can be ignored.
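A small sketch (assuming equal priors, as the surrounding text does) of one way to visualize that boundary for the two Gaussians above, namely the contour where the two densities agree:

    (* Visualize the Bayes boundary as the set of points where the two class densities are equal. *)
    d1[x_, y_] = PDF[MultinormalDistribution[{-1, -1}, {{2, 1/2}, {1/2, 2}}], {x, y}];
    d2[x_, y_] = PDF[MultinormalDistribution[{1, 1}, {{1, 0}, {0, 1}}], {x, y}];
    ContourPlot[d1[x, y] == d2[x, y], {x, -4, 4}, {y, -4, 4}]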


If errors are to be avoided, it is natural to seek a decision rule that minimizes the probability of error, that is, the error rate. Given the covariance matrix S of a Gaussian distribution, the eigenvectors of S are the principal directions of the distribution, and the eigenvalues are the variances of the corresponding principal directions. Allowing the use of more than one feature merely requires replacing the scalar x by the feature vector x, where x is in a d-dimensional Euclidean space Rd called the feature space. Rearranging these leads us to the answer to our question, which is called Bayes formula: P(wj|x) = p(x|wj) P(wj) / p(x), where the evidence p(x) = Σj p(x|wj) P(wj).
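A tiny numerical sketch of Bayes formula, with all numbers assumed for illustration: the posteriors are the prior-weighted likelihoods normalized by the evidence p(x).

    (* Posteriors from Bayes formula; the priors and class-conditional densities are assumed toy choices. *)
    prior = {2/3, 1/3};                                              (* P(w1), P(w2) *)
    lik[x_] := {PDF[NormalDistribution[0, 1], x], PDF[NormalDistribution[2, 1], x]};
    posterior[x_] := prior lik[x] / Total[prior lik[x]]              (* P(wj|x) *)
    posterior[1.2]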

Because P(wj|x) is the probability that the true state of nature is wj, the expected loss associated with taking action ai is R(ai|x) = Σj l(ai|wj) P(wj|x). As with the univariate density, samples from a normal population tend to fall in a single cloud or cluster centered about the mean vector, and the shape of the cluster depends on the covariance matrix. For example, if we were trying to recognize an apple from an orange, and we measured the colour and the weight as our feature vector, then chances are that there is some correlation between the two features. Instead, the boundary line will be tilted depending on how the two features covary and on their respective variances (see Figure 4.19).
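As a sketch of that expected-loss (conditional risk) computation, with an assumed loss matrix and assumed posteriors: each action's risk is its row of losses dotted with the posterior vector, and the minimum-risk action is chosen.

    (* Conditional risk R(ai|x) = sum_j l(ai|wj) P(wj|x); all numbers are assumed for illustration. *)
    loss = {{0, 1}, {1, 0}};            (* zero-one loss: rows = actions, columns = states of nature *)
    post = {0.7, 0.3};                  (* example posteriors P(w1|x), P(w2|x) *)
    risk = loss.post                    (* -> {0.3, 0.7}; action 1 (decide w1) has minimum risk *)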

One of the most useful ways to represent a classifier is in terms of a set of discriminant functions gi(x), i = 1, …, c. Moreover, in some problems it enables us to predict the error we will get when we generalize to novel patterns. The reason the distance grows more slowly in the x direction is that the variance for x is greater, and thus a point that is far away in the x direction is less surprising than a point equally far away in the y direction.
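A short sketch of that effect (covariance values assumed): with a larger variance along x, a point displaced along x has a smaller Mahalanobis distance than a point displaced by the same amount along y.

    (* Mahalanobis distance under an assumed diagonal covariance with larger x-variance. *)
    S = {{4., 0.}, {0., 1.}};  mu = {0., 0.};
    mahal[x_] := Sqrt[(x - mu).Inverse[S].(x - mu)]
    {mahal[{2, 0}], mahal[{0, 2}]}   (* -> {1., 2.} *)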

As an example of a classification involving discrete features, consider the two-category case with x = (x1, …, xd), where the components xi are either 0 or 1, with probabilities pi = Pr[xi = 1 | w1] and qi = Pr[xi = 1 | w2]. For this reason, the decision boundary is tilted. This case assumes that the covariance matrix for each class is arbitrary.
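For that binary-feature case, the usual log-likelihood-ratio discriminant can be sketched as below; the pi, qi and the prior are assumed values, not taken from the text. Decide w1 when g(x) > 0.

    (* Discrete-feature discriminant:
       g(x) = sum_i [ xi ln(pi/qi) + (1-xi) ln((1-pi)/(1-qi)) ] + ln(P(w1)/P(w2)). *)
    p = {0.8, 0.6, 0.7};  q = {0.3, 0.4, 0.2};   (* assumed pi = Pr[xi=1|w1], qi = Pr[xi=1|w2] *)
    prior1 = 0.5;
    g[x_] := x.Log[p/q] + (1 - x).Log[(1 - p)/(1 - q)] + Log[prior1/(1 - prior1)]
    g[{1, 0, 1}]   (* positive -> decide w1 *)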

Although the decision boundary remains a line parallel to the equal-priors boundary, it has been shifted away from the more likely class.

While this sort of situation rarely occurs in practice, it permits us to determine the optimal (Bayes) classifier against which we can compare all other classifiers. If a general decision rule a(x) tells us which action to take for every possible observation x, the overall risk R is given by R = ∫ R(a(x)|x) p(x) dx. In order to keep things simple, assume also that this arbitrary covariance matrix is the same for each class wi. This leads to the requirement that the quadratic form wTSw never be negative.

But since w = mi − mj, the hyperplane which separates Ri and Rj is orthogonal to the line that links their means. If this is true for some class i, then the covariance matrix for that class will have identical diagonal elements. If we define F to be the matrix whose columns are the orthonormal eigenvectors of S, and L the diagonal matrix of the corresponding eigenvalues, then the transformation A = FL−1/2 applied to the coordinates yields a distribution whose covariance is the identity matrix; that is, it whitens the data.
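A small numeric sketch of that whitening transform, using an assumed covariance matrix:

    (* Whitening: A = F.L^(-1/2), where F's columns are orthonormal eigenvectors of S. *)
    S = {{2., 0.5}, {0.5, 2.}};                       (* assumed covariance *)
    {vals, vecs} = Eigensystem[S];
    F = Transpose[Normalize /@ vecs];                 (* columns = orthonormal eigenvectors *)
    A = F.DiagonalMatrix[1/Sqrt[vals]];
    Transpose[A].S.A                                  (* ~ identity matrix *)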

This is the class-conditional probability density (state-conditional probability density) function: the probability density function for x given that the state of nature is wj. The decision regions vary in their shapes and do not need to be connected. Therefore, in expanded form we have gi(x) = −½ (x − mi)^T Si^(−1) (x − mi) − (d/2) ln 2π − ½ ln|Si| + ln P(wi).

To classify a feature vector x, measure the Euclidean distance from x to each of the c mean vectors, and assign x to the category of the nearest mean. If the variables xi and xj are statistically independent, the covariances are zero and the covariance matrix is diagonal. Figure 4.15: As the priors change, the decision boundary through point x0 shifts away from the more common class mean (one-dimensional Gaussian distributions). Instead, it is tilted so that its points are of equal distance to the contour lines in w1 and those in w2.

This is because identical covariance matrices imply that the two classes have identically shaped clusters about their mean vectors. For the minimum error-rate case, we can simplify things further by taking gi(x) = P(wi|x), so that the maximum discriminant function corresponds to the maximum posterior probability. If all the off-diagonal elements are zero, p(x) reduces to the product of the univariate normal densities for the components of x. As in case 1, a line through the point x0 defines this decision boundary between Ri and Rj.
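A quick check of that factorization, with an assumed mean and assumed variances: with a diagonal covariance, the joint normal density equals the product of the univariate normal densities of the components.

    (* With diagonal covariance, the multivariate normal density factors; values are assumed. *)
    mu = {1, 2};  sd = {1, 3};
    pdfJoint[x_] := PDF[MultinormalDistribution[mu, DiagonalMatrix[sd^2]], x]
    pdfProd[x_]  := Times @@ MapThread[PDF[NormalDistribution[#1, #2], #3] &, {mu, sd, x}]
    {pdfJoint[{0.5, 1.0}], pdfProd[{0.5, 1.0}]}   (* the two values agree *)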

Even in one dimension, for arbitrary variance the decision regions need not be simply connected (Figure 4.20). Two-dimensional examples with different decision boundaries are shown in Figure 4.23, Figure 4.24, and Figure 4.25. For the problem above I get 0.253579 using the following Mathematica code: dens1[x_, y_] = PDF[MultinormalDistribution[{-1, -1}, {{2, 1/2}, {1/2, 2}}], {x, y}]; dens2[x_, y_] = PDF[MultinormalDistribution[{1, 1}, {{1, 0}, {0, 1}}], {x, y}];
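The integration commands themselves did not survive in the source; a minimal sketch of how the two pieces could be integrated numerically, assuming equal priors and using an indicator via Boole, is:

    (* Sketch: misclassification mass of each class inside the other class's decision region,
       assuming equal priors; dens1 and dens2 are defined above. *)
    piece1 = NIntegrate[dens1[x, y] Boole[dens2[x, y] > dens1[x, y]],
       {x, -Infinity, Infinity}, {y, -Infinity, Infinity}];
    piece2 = NIntegrate[dens2[x, y] Boole[dens1[x, y] > dens2[x, y]],
       {x, -Infinity, Infinity}, {y, -Infinity, Infinity}];
    (piece1 + piece2)/2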