Decision-theoretic rough sets

In the mathematical theory of decisions, decision-theoretic rough sets (DTRS) is a probabilistic extension of rough set classification. First created in 1990 by Dr. Yiyu Yao,[1] the extension makes use of loss functions to derive the $\alpha$ and $\beta$ region parameters. As in classical rough sets, lower and upper approximations of a set are used.

Definitions

The following contains the basic principles of decision-theoretic rough sets.

Conditional risk

Using the Bayesian decision procedure, the decision-theoretic rough set (DTRS) approach allows for minimum-risk decision making based on observed evidence. Let $\mathcal{A} = \{a_1, \ldots, a_m\}$ be a finite set of possible actions and let $\Omega = \{w_1, \ldots, w_s\}$ be a finite set of states. $P(w_j \mid \mathbf{x})$ is calculated as the conditional probability of an object $x$ being in state $w_j$ given the object description $\mathbf{x}$. $\lambda(a_i \mid w_j)$ denotes the loss, or cost, for performing action $a_i$ when the state is $w_j$. The expected loss (conditional risk) associated with taking action $a_i$ is given by:

$$R(a_i \mid \mathbf{x}) = \sum_{j=1}^{s} \lambda(a_i \mid w_j) \, P(w_j \mid \mathbf{x}).$$
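
A minimal sketch of this minimum-risk selection is given below; the action names, state names, and numeric loss and probability values are illustrative assumptions, not part of the original formulation.

```python
# Minimal sketch of Bayesian minimum-risk decision making.
# All action names, state names, and numeric values below are illustrative.

def conditional_risk(action, losses, posteriors):
    """R(a_i | x) = sum_j lambda(a_i | w_j) * P(w_j | x)."""
    return sum(losses[(action, state)] * p for state, p in posteriors.items())

def minimum_risk_action(actions, losses, posteriors):
    """Choose the action with the smallest expected loss."""
    return min(actions, key=lambda a: conditional_risk(a, losses, posteriors))

# Two states, three actions (hypothetical losses and posteriors).
actions = ["a1", "a2", "a3"]
posteriors = {"w1": 0.9, "w2": 0.1}              # P(w_j | x)
losses = {("a1", "w1"): 0.0, ("a1", "w2"): 4.0,  # lambda(a_i | w_j)
          ("a2", "w1"): 4.0, ("a2", "w2"): 0.0,
          ("a3", "w1"): 1.0, ("a3", "w2"): 1.0}

print(minimum_risk_action(actions, losses, posteriors))  # -> a1 (risk 0.4)
```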

Object classification with the approximation operators can be fitted into the Bayesian decision framework. The set of actions is given by $\mathcal{A} = \{a_P, a_N, a_B\}$, where $a_P$, $a_N$, and $a_B$ represent the three actions of classifying an object into POS($A$), NEG($A$), and BND($A$) respectively. To indicate whether an element is in $A$ or not in $A$, the set of states is given by $\Omega = \{A, A^c\}$. Let $\lambda(a_\diamond \mid A)$ denote the loss incurred by taking action $a_\diamond$ when an object belongs to $A$, and let $\lambda(a_\diamond \mid A^c)$ denote the loss incurred by taking the same action when the object belongs to $A^c$.

Loss functions

Let $\lambda_{PP}$ denote the loss function for classifying an object in $A$ into the POS region, $\lambda_{BP}$ denote the loss function for classifying an object in $A$ into the BND region, and let $\lambda_{NP}$ denote the loss function for classifying an object in $A$ into the NEG region. A loss function $\lambda_{\diamond N}$ denotes the loss of classifying an object that does not belong to $A$ into the region specified by $\diamond$.

Taking individual actions can be associated with the expected loss $R(a_\diamond \mid [x])$ and can be expressed as:

$$R(a_P \mid [x]) = \lambda_{PP} P(A \mid [x]) + \lambda_{PN} P(A^c \mid [x]),$$
$$R(a_N \mid [x]) = \lambda_{NP} P(A \mid [x]) + \lambda_{NN} P(A^c \mid [x]),$$
$$R(a_B \mid [x]) = \lambda_{BP} P(A \mid [x]) + \lambda_{BN} P(A^c \mid [x]),$$

where $\lambda_{\diamond P} = \lambda(a_\diamond \mid A)$, $\lambda_{\diamond N} = \lambda(a_\diamond \mid A^c)$, and $\diamond = P$, $N$, or $B$.
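
A short sketch of these three expected losses follows; the numeric loss values are hypothetical and the equivalence class $[x]$ is represented simply by the probability $P(A \mid [x])$.

```python
# Expected losses for the three DTRS actions (illustrative loss values).
# p_A is P(A | [x]); the complement probability is 1 - p_A.

def dtrs_risks(p_A, lam):
    """Return (R(a_P|[x]), R(a_N|[x]), R(a_B|[x])) for a loss dict lam."""
    p_Ac = 1.0 - p_A
    r_P = lam["PP"] * p_A + lam["PN"] * p_Ac
    r_N = lam["NP"] * p_A + lam["NN"] * p_Ac
    r_B = lam["BP"] * p_A + lam["BN"] * p_Ac
    return r_P, r_N, r_B

# Hypothetical losses satisfying lambda_PP <= lambda_BP < lambda_NP
# and lambda_NN <= lambda_BN < lambda_PN.
lam = {"PP": 0.0, "BP": 1.0, "NP": 4.0,
       "NN": 0.0, "BN": 1.0, "PN": 4.0}

print(dtrs_risks(0.8, lam))  # -> (0.8, 3.2, 1.0)
```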

Minimum-risk decision rules

If we consider the loss functions with $\lambda_{PP} \leq \lambda_{BP} < \lambda_{NP}$ and $\lambda_{NN} \leq \lambda_{BN} < \lambda_{PN}$, the following decision rules are formulated (P, N, B):

  • P: If $P(A \mid [x]) \geq \alpha$ and $P(A \mid [x]) \geq \gamma$, decide POS($A$);
  • N: If $P(A \mid [x]) \leq \beta$ and $P(A \mid [x]) \leq \gamma$, decide NEG($A$);
  • B: If $\beta \leq P(A \mid [x]) \leq \alpha$, decide BND($A$);

where

$$\alpha = \frac{\lambda_{PN} - \lambda_{BN}}{(\lambda_{PN} - \lambda_{BN}) + (\lambda_{BP} - \lambda_{PP})},$$
$$\gamma = \frac{\lambda_{PN} - \lambda_{NN}}{(\lambda_{PN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{PP})},$$
$$\beta = \frac{\lambda_{BN} - \lambda_{NN}}{(\lambda_{BN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{BP})}.$$
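
The thresholds follow directly from the loss values. A minimal sketch, using the same hypothetical loss dictionary as the earlier example, is:

```python
# Compute the alpha, gamma, and beta thresholds from a loss dict.
# The numeric losses are hypothetical, chosen only for illustration.

def dtrs_thresholds(lam):
    alpha = (lam["PN"] - lam["BN"]) / ((lam["PN"] - lam["BN"]) + (lam["BP"] - lam["PP"]))
    gamma = (lam["PN"] - lam["NN"]) / ((lam["PN"] - lam["NN"]) + (lam["NP"] - lam["PP"]))
    beta  = (lam["BN"] - lam["NN"]) / ((lam["BN"] - lam["NN"]) + (lam["NP"] - lam["BP"]))
    return alpha, gamma, beta

lam = {"PP": 0.0, "BP": 1.0, "NP": 4.0,
       "NN": 0.0, "BN": 1.0, "PN": 4.0}

print(dtrs_thresholds(lam))  # -> (0.75, 0.5, 0.25), so alpha > gamma > beta
```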

The $\alpha$, $\beta$, and $\gamma$ values define the three different regions, giving us an associated risk for classifying an object. When $\alpha > \beta$, we get $\alpha > \gamma > \beta$ and can simplify (P, N, B) into (P1, N1, B1); a short sketch of these simplified rules follows the list:

  • P1: If $P(A \mid [x]) \geq \alpha$, decide POS($A$);
  • N1: If $P(A \mid [x]) \leq \beta$, decide NEG($A$);
  • B1: If $\beta < P(A \mid [x]) < \alpha$, decide BND($A$).
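
The following sketch applies (P1, N1, B1) given the two thresholds; the function name and the example probabilities are illustrative assumptions, not part of the original model.

```python
# Three-way classification by the simplified rules (P1, N1, B1).
# alpha and beta would normally be derived from the loss functions;
# the values used here are illustrative.

def three_way_decision(p_A, alpha, beta):
    """Classify an object with probability p_A = P(A | [x])."""
    if p_A >= alpha:
        return "POS"
    if p_A <= beta:
        return "NEG"
    return "BND"  # beta < p_A < alpha

alpha, beta = 0.75, 0.25
for p in (0.9, 0.5, 0.1):
    print(p, three_way_decision(p, alpha, beta))
# 0.9 POS, 0.5 BND, 0.1 NEG
```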

When $\alpha = \beta$, we get $\alpha = \gamma = \beta$ and can simplify the rules (P-B) into (P2-B2), which divide the regions based solely on $\alpha$; a minimal sketch follows the rules:

  • P2: If $P(A \mid [x]) > \alpha$, decide POS($A$);
  • N2: If $P(A \mid [x]) < \alpha$, decide NEG($A$);
  • B2: If $P(A \mid [x]) = \alpha$, decide BND($A$).
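
For completeness, a minimal variant of the earlier sketch for this degenerate case, using a single hypothetical threshold $\alpha$:

```python
# Degenerate case alpha = beta = gamma: a single threshold splits the regions.
# The threshold and probability values are illustrative.

def three_way_decision_single(p_A, alpha):
    """Apply rules (P2, N2, B2) with one threshold alpha."""
    if p_A > alpha:
        return "POS"
    if p_A < alpha:
        return "NEG"
    return "BND"  # only when P(A | [x]) equals alpha exactly

print(three_way_decision_single(0.6, 0.5))  # -> POS
print(three_way_decision_single(0.5, 0.5))  # -> BND
```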

Data mining, feature selection, information retrieval, and classification are just some of the applications in which the DTRS approach has been successfully used.

References

  1. Yao, Y.Y.; Wong, S.K.M.; Lingras, P. (1990). "A decision-theoretic rough set model". Methodologies for Intelligent Systems, 5, Proceedings of the 5th International Symposium on Methodologies for Intelligent Systems. Knoxville, Tennessee, USA: North-Holland: 17–25.