Data Mining Lecture Slides (数据挖掘课件.ppt)

Chapter 3: Supervised Learning
CS583, Bing Liu, UIC

Road Map
- Basic concepts
- Decision tree induction
- Evaluation of classifiers
- Rule induction
- Classification using association rules
- Naïve Bayesian classification
- Naïve Bayes for text classification
- Support vector machines
- K-nearest neighbor
- Ensemble methods: Bagging and Boosting
- Summary

An example application
- An emergency room in a hospital measures 17 variables (e.g., blood pressure, age, etc.) of newly admitted patients.
- A decision is needed: whether to put a new patient in an intensive-care unit.
- Due to the high cost of the ICU, those patients who may survive less than a month are given higher priority.
- Problem: to predict high-risk patients and discriminate them from low-risk patients.

Another application
- A credit card company receives thousands of applications for new cards. Each application contains information about the applicant:
  - age
  - marital status
  - annual salary
  - outstanding debts
  - credit rating
  - etc.
- Problem: to decide whether an application should be approved, i.e., to classify applications into two categories, approved and not approved.

Machine learning and our focus
- Like human learning from past experiences. A computer does not have "experiences"; a computer system learns from data, which represent some "past experiences" of an application domain.
- Our focus: learn a target function that can be used to predict the values of a discrete class attribute, e.g., approved or not-approved, and high-risk or low-risk.
- The task is commonly called supervised learning, classification, or inductive learning.

The data and the goal
- Data: a set of data records (also called examples, instances, or cases) described by
  - k attributes: A1, A2, ..., Ak
  - a class: each example is labelled with a pre-defined class.
- Goal: to learn a classification model from the data that can be used to predict the classes of new (future, or test) cases/instances.

An example: data (loan application)
[Table: loan application records with the class attribute "Approved or not"; shown as an image in the original slides.]

An example: the learning task
- Learn a classification model from the data.
- Use the model to classify future loan applications into Yes (approved) and No (not approved).
- What is the class for the following case/instance? [The test instance is shown in the original slide.]

Supervised vs. unsupervised learning
- Supervised learning: classification is seen as supervised learning from examples.
  - Supervision: the data (observations, measurements, etc.) are labeled with pre-defined classes, as if a "teacher" gave the classes (supervision).
  - Test data are classified into these classes too.
- Unsupervised learning (clustering)
  - Class labels of the data are unknown.
  - Given a set of data, the task is to establish the existence of classes or clusters in the data.

Supervised learning process: two steps
- Learning (training): learn a model using the training data.
- Testing: test the model using unseen test data to assess the model accuracy.

What do we mean by learning?
- Given a data set D, a task T, and a performance measure M, a computer system is said to learn from D to perform the task T if, after learning, the system's performance on T improves as measured by M.
- In other words, the learned model helps the system perform T better compared to no learning.

An example
- Data: loan application data.
- Task: predict whether a loan should be approved or not.
- Performance measure: accuracy.
- No learning: classify all future applications (test data) to the majority class (i.e., Yes): accuracy = 9/15 = 60%.
- We can do better than 60% with learning, as the sketch below illustrates.
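A minimal sketch of the "no learning" baseline above, assuming the 9-Yes/6-No class split stated on the slide; the variable names are our own illustration, not part of the slides.

```python
from collections import Counter

# Class labels of the 15 loan examples on the slide: 9 Yes, 6 No.
labels = ["Yes"] * 9 + ["No"] * 6

# "No learning": always predict the majority class.
majority_class, majority_count = Counter(labels).most_common(1)[0]
baseline_accuracy = majority_count / len(labels)

print(majority_class, baseline_accuracy)  # Yes 0.6  (= 9/15)
```

Any learned model is only useful if it beats this baseline on unseen test data.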

Fundamental assumption of learning
- Assumption: the distribution of training examples is identical to the distribution of test examples (including future unseen examples).
- In practice, this assumption is often violated to some degree. Strong violations will clearly result in poor classification accuracy.
- To achieve good accuracy on the test data, training examples must be sufficiently representative of the test data.

Road Map
- Basic concepts
- Decision tree induction
- Evaluation of classifiers
- Rule induction
- Classification using association rules
- Naïve Bayesian classification
- Naïve Bayes for text classification
- Support vector machines
- K-nearest neighbor
- Ensemble methods: Bagging and Boosting
- Summary

Introduction
- Decision tree learning is one of the most widely used techniques for classification.
  - Its classification accuracy is competitive with other methods, and it is very efficient.
- The classification model is a tree, called a decision tree.
- C4.5 by Ross Quinlan is perhaps the best-known system. It can be downloaded from the Web.

The loan data (reproduced)
[Table: loan application records with the class attribute "Approved or not"; shown as an image in the original slides.]

A decision tree from the loan data
[Figure: a decision tree with decision nodes and leaf nodes (classes); shown as an image in the original slides.]

Use the decision tree
[Figure: a test instance is classified by following a path from the root to a leaf; the predicted class here is No.]

Is the decision tree unique?
- No. Here is a simpler tree. We want a tree that is both small and accurate: easy to understand, and it tends to perform better.
- Finding the best tree is NP-hard.
- All current tree-building algorithms are heuristic algorithms.

From a decision tree to a set of rules
- A decision tree can be converted to a set of rules.
- Each path from the root to a leaf is a rule, as the sketch below illustrates.
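A minimal sketch of the tree-to-rules conversion, assuming a tree stored as nested dicts; the attribute Own_house appears later in the slides, while Has_job and this particular tree shape are hypothetical, for illustration only.

```python
# A hypothetical decision tree for the loan data: internal nodes name an
# attribute; leaves are class labels.
tree = {
    "attribute": "Own_house",
    "branches": {
        "true": "Yes",
        "false": {"attribute": "Has_job",
                  "branches": {"true": "Yes", "false": "No"}},
    },
}

def tree_to_rules(node, conditions=()):
    """Each root-to-leaf path becomes one if-then rule."""
    if not isinstance(node, dict):  # leaf: emit the accumulated conditions
        return [(list(conditions), node)]
    rules = []
    for value, child in node["branches"].items():
        rules += tree_to_rules(child, conditions + ((node["attribute"], value),))
    return rules

for conds, label in tree_to_rules(tree):
    print(" AND ".join(f"{a} = {v}" for a, v in conds), "->", label)
# Own_house = true -> Yes
# Own_house = false AND Has_job = true -> Yes
# Own_house = false AND Has_job = false -> No
```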

Algorithm for decision tree learning
- Basic algorithm (a greedy divide-and-conquer algorithm)
  - Assume attributes are categorical for now (continuous attributes can be handled too).
  - The tree is constructed in a top-down recursive manner.
  - At the start, all the training examples are at the root.
  - Examples are partitioned recursively based on selected attributes.
  - Attributes are selected on the basis of an impurity function (e.g., information gain).
- Conditions for stopping partitioning
  - All examples for a given node belong to the same class.
  - There are no remaining attributes for further partitioning: the majority class becomes the leaf.
  - There are no examples left.

Decision tree learning algorithm
[Figure: pseudocode of the recursive tree-building procedure; shown as an image in the original slides. A sketch in code follows.]
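A minimal sketch of the greedy recursive procedure just described, assuming categorical attributes, information gain as the impurity function, and examples stored as (feature-dict, label) pairs; this is our own reconstruction of the missing pseudocode figure, not the exact C4.5 algorithm. The tree uses the same nested-dict representation as the rules sketch above.

```python
import math
from collections import Counter

def entropy(examples):
    """Entropy of the class distribution of a list of (features, label) pairs."""
    counts = Counter(label for _, label in examples)
    total = len(examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def partition(examples, attr):
    """Group examples by their value of attribute attr."""
    parts = {}
    for features, label in examples:
        parts.setdefault(features[attr], []).append((features, label))
    return parts

def information_gain(examples, attr):
    """Entropy reduction achieved by branching on attr."""
    total = len(examples)
    parts = partition(examples, attr)
    expected = sum(len(p) / total * entropy(p) for p in parts.values())
    return entropy(examples) - expected

def build_tree(examples, attributes):
    labels = [label for _, label in examples]
    # Stop: all examples belong to the same class.
    if len(set(labels)) == 1:
        return labels[0]
    # Stop: no attributes left -> majority-class leaf.
    if not attributes:
        return Counter(labels).most_common(1)[0][0]
    # Greedy step: branch on the attribute with the highest information gain.
    best = max(attributes, key=lambda a: information_gain(examples, a))
    remaining = [a for a in attributes if a != best]
    # Branching only on observed values means no branch is ever empty
    # (the "no examples left" stopping condition).
    return {"attribute": best,
            "branches": {v: build_tree(part, remaining)
                         for v, part in partition(examples, best).items()}}
```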

Choose an attribute to partition data
- The key to building a decision tree: which attribute to choose in order to branch.
- The objective is to reduce impurity or uncertainty in the data as much as possible.
  - A subset of data is pure if all instances belong to the same class.
- The heuristic in C4.5 is to choose the attribute with the maximum information gain or gain ratio based on information theory.

The loan data (reproduced)
[Table: loan application records with the class attribute "Approved or not"; shown as an image in the original slides.]

Two possible roots: which is better?
[Figure: two candidate root attributes, (A) and (B); shown as an image in the original slides.]
- Fig. (B) seems to be better.

Information theory
- Information theory provides a mathematical basis for measuring information content.
- To understand the notion of information, think about it as providing the answer to a question, for example, whether a coin will come up heads.
  - If one already has a good guess about the answer, then the actual answer is less informative.
  - If one already knows that the coin is rigged so that it comes up heads with probability 0.99, then a message (advance information) about the actual outcome of a flip is worth less than it would be for an honest coin (50-50).

Information theory (cont.)
- For a fair (honest) coin, you have no information, and you are willing to pay more (say, in terms of $) for advance information: the less you know, the more valuable the information.
- Information theory uses this same intuition, but instead of measuring the value of information in dollars, it measures information content in bits.
- One bit of information is enough to answer a yes/no question about which one has no idea, such as the flip of a fair coin.

Information theory: entropy measure
- The entropy formula:

  entropy(D) = -\sum_{j=1}^{|C|} \Pr(c_j) \log_2 \Pr(c_j)

  where \Pr(c_j) is the probability of class c_j in data set D, and |C| is the number of classes.
- We use entropy as a measure of the impurity or disorder of data set D. (Or, a measure of information in a tree.)

Entropy measure: let us get a feeling
- As the data become purer and purer, the entropy value becomes smaller and smaller. This is useful to us!

Information gain
- Given a set of examples D, we first compute its entropy, entropy(D).
- If we make attribute A_i, with v values, the root of the current tree, this will partition D into v subsets D_1, D_2, ..., D_v. The expected entropy if A_i is used as the current root:

  entropy_{A_i}(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} entropy(D_j)

Information gain (cont.)
- The information gained by selecting attribute A_i to branch on, or to partition the data, is

  gain(D, A_i) = entropy(D) - entropy_{A_i}(D)

- We choose the attribute with the highest gain to branch/split the current tree, as the numeric sketch below illustrates.
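A small numeric sketch of the entropy and gain formulas above. The 9-Yes/6-No split of D is from the earlier slide; the two-valued attribute and its split sizes are assumed for illustration.

```python
import math

def entropy(probs):
    """entropy(D) = -sum_j Pr(c_j) * log2(Pr(c_j)); terms with Pr = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# "Let us get a feeling": purer data -> smaller entropy.
print(entropy([0.5, 0.5]))    # 1.0    maximally impure (fair coin)
print(entropy([0.99, 0.01]))  # ~0.081 nearly pure (rigged coin)
print(entropy([1.0]))         # 0.0    pure

# Entropy of the loan data D (9 Yes, 6 No):
H_D = entropy([9/15, 6/15])                      # ~0.971

# Assumed attribute A_i with v = 2 values, splitting D into a pure subset of
# 6 examples (all Yes) and a subset of 9 examples (3 Yes, 6 No):
H_Ai = (6/15) * entropy([1.0]) + (9/15) * entropy([3/9, 6/9])  # ~0.551
print(H_D - H_Ai)                                # gain ~0.420
```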

An example
- Own_house is the best choice for the root.
[Figure: information-gain computations for the candidate attributes; shown as an image in the original slides.]

We build the final tree
[Figure: the final decision tree built from the loan data; shown as an image in the original slides.]
- We can use the information gain ratio to evaluate the impurity as well (see the handout).

Handling continuous attributes
- Handle a continuous attribute by splitting its range into two intervals (can be more) at each node.
- How to find the best threshold to divide?
  - Use information gain or gain ratio again.
  - Sort all the values of the continuous attribute in increasing order: v_1, v_2, ..., v_r.
  - One possible threshold lies between each pair of adjacent values v_i and v_{i+1}.
  - Try all possible thresholds and find the one that maximizes the gain (or gain ratio), as sketched below.

An example in a continuous space
[Figure: axis-parallel splits in a two-dimensional continuous space; shown as an image in the original slides.]
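A minimal sketch of the threshold search just described, assuming one numeric attribute and class labels as strings; the sample values are hypothetical.

```python
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def best_threshold(values, labels):
    """Try a threshold between each pair of adjacent sorted values;
    return the threshold that maximizes information gain."""
    pairs = sorted(zip(values, labels))
    n, base = len(pairs), entropy(labels)
    best_gain, best_t = -1.0, None
    for i in range(n - 1):
        if pairs[i][0] == pairs[i + 1][0]:
            continue  # no threshold fits between equal values
        t = (pairs[i][0] + pairs[i + 1][0]) / 2
        left = [lab for v, lab in pairs if v <= t]
        right = [lab for v, lab in pairs if v > t]
        gain = base - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain

# Hypothetical ages and class labels:
print(best_threshold([22, 25, 30, 35, 40, 50],
                     ["No", "No", "Yes", "Yes", "Yes", "No"]))  # (27.5, ~0.459)
```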

Avoid overfitting in classification
- Overfitting: a tree may overfit the training data.
  - Good accuracy on the training data but poor accuracy on the test data.
  - Symptoms: the tree is too deep and has too many branches, some of which may reflect anomalies due to noise or outliers.
- Two approaches to avoid overfitting:
  - Pre-pruning: halt tree construction early. This is difficult to decide, because we do not know what may happen subsequently if we keep growing the tree.
  - Post-pruning: remove branches or sub-trees from a "fully grown" tree. This method is commonly used. C4.5 uses a statistical method to estimate the errors at each node for pruning. A validation set may be used for pruning as well.

An example
[Figure: a deep tree that is likely to overfit the data; shown as an image in the original slides.]

Other issues in decision tree learning
- From tree to rules, and rule pruning
- Handling of missing values
- Handling skewed distributions
- Handling attributes and classes with different costs
- Attribute construction
- Etc.

Road Map
- Basic concepts
- Decision tree induction
- Evaluation of classifiers
- Rule induction
- Classification using association rules
- Naïve Bayesian classification
- Naïve Bayes for text classification
- Support vector machines
- K-nearest neighbor
- Ensemble methods: Bagging and Boosting
- Summary

Evaluating classification methods
- Predictive accuracy
- Efficiency
  - time to construct the model
  - time to use the model
- Robustness: handling noise and missing values
- Scalability: efficiency on disk-resident databases
- Interpretability: understandability of, and insight provided by, the model
- Compactness of the model: size of the tree, or the number of rules

Evaluation methods
- Holdout set: the available data set D is divided into two disjoint subsets,
  - the training set D_train (for learning a model)
  - the test set D_test (for testing the model)
- Important: the training set should not be used in testing, and the test set should not be used in learning.
  - The unseen test set provides an unbiased estimate of accuracy.
- The test set is also called the holdout set. (The examples in the original data set D are all labeled with classes.)
- This method is mainly used when the data set D is large. A minimal sketch follows.
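A minimal sketch of a holdout split, assuming the data are a list of labeled examples; the random shuffle, the 70/30 ratio, and the names are our own choices, not from the slides.

```python
import random

def holdout_split(data, train_fraction=0.7, seed=0):
    """Split labeled data into disjoint training and test (holdout) sets."""
    shuffled = data[:]              # copy so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Usage: D is a list of (features, label) pairs.
# D_train, D_test = holdout_split(D)
# Learn on D_train only; estimate accuracy on D_test only.
```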

Evaluation methods (cont.)
- n-fold cross-validation: the available data is partitioned into n equal-size disjoint subsets.
- Use each subset as the test set and combine the remaining n-1 subsets as the training set to learn a classifier.
- The procedure is run n times, giving n accuracies.
- The final estimated accuracy of learning is the average of the n accuracies.
- 10-fold and 5-fold cross-validations are commonly used.
- This method is used when the available data is not large.

Evaluation methods (cont.)
- Leave-one-out cross-validation: this method is used when the data set is very small.
- It is a special case of cross-validation: each fold of the cross-validation has only a single test example, and all the rest of the data is used in training.
- If the original data has m examples, this is m-fold cross-validation.

Evaluation methods (cont.)
- Validation set: the available data is divided into three subsets,
  - a training set,
  - a validation set, and
  - a test set.
- A validation set is used frequently for estimating parameters in learning algorithms. In such cases, the parameter values that give the best accuracy on the validation set are used as the final parameter values.
- Cross-validation can be used for parameter estimation as well. A sketch of n-fold cross-validation follows.
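A minimal sketch of n-fold cross-validation under the same assumed list-of-examples layout as the holdout sketch; `learn` and `accuracy` are hypothetical stand-ins for a training procedure and its accuracy measure.

```python
def cross_validation(data, n, learn, accuracy):
    """Estimate accuracy by n-fold cross-validation.

    data:     list of labeled examples
    learn:    function training_set -> model        (hypothetical)
    accuracy: function (model, test_set) -> float   (hypothetical)
    """
    folds = [data[i::n] for i in range(n)]  # n disjoint, near-equal subsets
    scores = []
    for i in range(n):
        test_set = folds[i]
        train_set = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        model = learn(train_set)
        scores.append(accuracy(model, test_set))
    return sum(scores) / n  # average of the n accuracies

# Leave-one-out is the special case n = len(data): each fold holds one example.
```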

Classification measures
- Accuracy is only one measure (error = 1 - accuracy), and it is not suitable in some applications.
- In text mining, we may only be interested in the documents of a particular topic, which are only a small portion of a big document collection.
- In classification involving skewed or highly imbalanced data, e.g., network intrusion and financial fraud detection, we are interested only in the minority class.
  - High accuracy does not mean any intrusion is detected: with, e.g., 1% intrusions, one can achieve 99% accuracy by doing nothing.
- The class of interest is commonly called the positive class, and the rest the negative classes.

Precision and recall measures
- Used in information retrieval and text classification.
- We use a confusion matrix to introduce them:

                       Classified positive    Classified negative
  Actual positive              TP                     FN
  Actual negative              FP                     TN

  (TP, FN, FP, TN: counts of true positives, false negatives, false positives, and true negatives.)

Precision and recall measures (cont.)
- Precision p is the number of correctly classified positive examples divided by the total number of examples that are classified as positive:

  p = \frac{TP}{TP + FP}

- Recall r is the number of correctly classified positive examples divided by the total number of actual positive examples in the test set:

  r = \frac{TP}{TP + FN}

An example
- This confusion matrix gives precision p = 100% and recall r = 1%, because we classified only one positive example correctly (TP = 1, FN = 99) and no negative examples wrongly (FP = 0).
[The full confusion matrix is shown as a table in the original slide.]
- Note: precision and recall only measure classification on the positive class.

F1-value (also called F1-score)
- It is hard to compare two classifiers using two measures. The F1-score combines precision and recall into one measure:

  F_1 = \frac{2pr}{p + r}

- The harmonic mean of two numbers tends to be closer to the smaller of the two: for the F1-value to be large, both p and r must be large, as the numeric check below shows.
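A quick numeric check of the example above, using the counts implied by p = 100% and r = 1%:

```python
# One positive classified correctly, no negatives classified wrongly,
# 99 positives missed (counts implied by the slide's example).
TP, FP, FN = 1, 0, 99

p = TP / (TP + FP)        # precision = 1.0   (100%)
r = TP / (TP + FN)        # recall    = 0.01  (1%)
f1 = 2 * p * r / (p + r)  # F1 ≈ 0.0198, pulled toward the smaller of p and r

print(p, r, f1)
```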

Another evaluation method: scoring and ranking
- Scoring is related to classification. We are interested in a single class (the positive class), e.g., the buyers class in a marketing database.
- Instead of assigning each test instance a definite class, scoring assigns a probability estimate (PE) to indicate the likelihood that the example belongs to the positive class.
