On the Design of Loss Functions for Classification
Video duration: 67.72 mins.
Google Tech Talk, July 30, 2010. Presented by Hamed Masnadi-Shirazi. ABSTRACT: The problem of designing classifiers for machine learning is examined from the standpoint of probability elicitation in statistics. It is shown that the standard approach, which starts by specifying a loss and then minimizes the associated conditional risk, is overly restrictive. A better alternative is to start by specifying a functional form for the minimum conditional risk and then derive the corresponding loss function. This view has several consequences of practical interest, for example: 1) the widespread requirement that the loss be convex is unnecessary, and 2) many new losses for classification can be derived. These points are illustrated by the derivation of new losses that are not convex, yet do not compromise the computational tractability of classifier design and are robust to contamination of the data with outliers. It is argued that such robustness requires loss functions that penalize both large positive and large negative margins. In addition, the connection between risk minimization and probability elicitation is extended to the cost-sensitive setting, in a manner consistent with the cost-sensitive Bayes risk and the associated Bayes decision rule, and a new algorithm for learning cost-sensitive classifiers is proposed. Hamed Masnadi-Shirazi is a PhD student in the Statistical Visual Computing Lab at the University of California, San Diego. He…
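As a minimal sketch of the robustness point above: a convex margin loss such as the logistic loss grows without bound as the margin becomes more negative, so a single outlier can dominate the empirical risk, while a bounded non-convex loss saturates. The Savage loss below, phi(v) = 1/(1 + e^v)^2, is taken from the speaker's related SavageBoost work (Masnadi-Shirazi and Vasconcelos, NIPS 2008); the function names here are illustrative, not from the talk.

```python
import math

def logistic_loss(v):
    # Convex margin loss: grows roughly linearly as v -> -inf,
    # so points with large negative margins (outliers) dominate the risk.
    return math.log(1.0 + math.exp(-v))

def savage_loss(v):
    # Non-convex, bounded loss: saturates at 1 as v -> -inf,
    # limiting the influence of any single outlier.
    return 1.0 / (1.0 + math.exp(v)) ** 2

# Margin v = y * f(x): negative for misclassified points.
for v in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(f"v={v:6.1f}  logistic={logistic_loss(v):8.4f}  "
          f"savage={savage_loss(v):7.4f}")
```

Running the loop makes the contrast concrete: at v = -10 the logistic loss exceeds 10, while the Savage loss stays below 1, which is the behavior the talk points to when arguing that convexity is not necessary for a tractable, robust classifier.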
Tags: google tech talks, machine learning, machine vision