Lecture 4: Data Classification and Prediction
Xu Congfu (徐从富), Associate Professor, Institute of Artificial Intelligence, Zhejiang University
Course slides for the undergraduate course "Introduction to Data Mining", Zhejiang University

Outline
- What is classification? What is prediction?
- Issues regarding classification and prediction
- Classification by decision tree induction
- Bayesian classification
- Prediction
- Summary
- Reference

Classification vs. Prediction
- Classification
  - predicts categorical class labels (discrete or nominal)
  - classifies data (constructs a model) based on the training set and the values (class labels) of a classifying attribute, and uses the model to classify new data
- Prediction
  - models continuous-valued functions, i.e., predicts unknown or missing values
- Typical applications: credit approval, target marketing, medical diagnosis, fraud detection

Classification: A Two-Step Process
- Model construction: describing a set of predetermined classes
  - Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
  - The set of tuples used for model construction is the training set
  - The model is represented as classification rules, decision trees, or mathematical formulae
- Model usage: classifying future or unknown objects
  - Estimate the accuracy of the model
    - The known label of each test sample is compared with the classified result from the model
    - The accuracy rate is the percentage of test-set samples that are correctly classified by the model
    - The test set is independent of the training set; otherwise over-fitting will occur
  - If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known

Classification Process (1): Model Construction
- Training data and a classification algorithm produce a classifier (model)
- Example rule learned: IF rank = "professor" OR years > 6 THEN tenured = "yes"

Classification Process (2): Use the Model in Prediction
- Apply the classifier to the testing data to estimate accuracy, then to unseen data
- Example query on unseen data: (Jeff, Professor, 4), Tenured?
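The two-step process above can be written as a short sketch. This is an illustration for these notes, assuming scikit-learn and a stand-in dataset; it is not code from the lecture.

```python
# Minimal sketch of the two-step classification process (illustration only;
# the iris data stands in for a prepared training set with class labels).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Step 1: model construction on the training set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2: model usage -- estimate accuracy on an independent test set,
# then classify unseen tuples if the accuracy is acceptable.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"test-set accuracy: {accuracy:.3f}")
```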

Supervised vs. Unsupervised Learning
- Supervised learning (classification)
  - Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
  - New data are classified based on the training set
- Unsupervised learning (clustering)
  - The class labels of the training data are unknown
  - Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data

Issues Regarding Classification and Prediction (1): Data Preparation
- Data cleaning: preprocess the data in order to reduce noise and handle missing values
- Relevance analysis (feature selection): remove irrelevant or redundant attributes
- Data transformation: generalize and/or normalize the data

Issues Regarding Classification and Prediction (2): Evaluating Classification Methods
- Accuracy: classifier accuracy and predictor accuracy
- Speed and scalability: time to construct the model (training time) and time to use the model (classification/prediction time)
- Robustness: handling noise and missing values
- Scalability: efficiency on disk-resident databases
- Interpretability: understanding and insight provided by the model
- Other measures, e.g., goodness of rules, such as decision tree size or the compactness of classification rules

Decision Tree Induction: Training Dataset
- This follows an example of Quinlan's ID3 (Playing Tennis)
- Output: a decision tree for "buys_computer"

Algorithm for Decision Tree Induction
- Basic algorithm (a greedy algorithm; a sketch follows this slide)
  - The tree is constructed in a top-down, recursive, divide-and-conquer manner
  - At the start, all the training examples are at the root
  - Attributes are categorical (if continuous-valued, they are discretized in advance)
  - Examples are partitioned recursively based on selected attributes
  - Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
- Conditions for stopping the partitioning
  - All samples for a given node belong to the same class
  - There are no remaining attributes for further partitioning; majority voting is employed for classifying the leaf
  - There are no samples left
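The sketch below illustrates the greedy top-down procedure on categorical attributes. It is written for these notes (the record/label representation is an assumption), not taken from the lecture.

```python
# Hedged sketch of greedy, top-down, divide-and-conquer decision-tree induction.
# Records are dicts mapping a categorical attribute name to its value; labels are
# the corresponding class labels.
from collections import Counter
import math

def entropy(labels):
    """Expected information needed to classify a set of labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def info_gain(records, labels, attr):
    """Gain(A): parent entropy minus the weighted entropy after splitting on attr."""
    total = len(labels)
    split = {}
    for rec, lab in zip(records, labels):
        split.setdefault(rec[attr], []).append(lab)
    remainder = sum(len(part) / total * entropy(part) for part in split.values())
    return entropy(labels) - remainder

def build_tree(records, labels, attributes):
    # Stopping conditions from the slide above. Branches are created only for
    # attribute values that actually occur, so the "no samples left" case does
    # not arise in this simplified sketch.
    if len(set(labels)) == 1:          # all samples belong to the same class
        return labels[0]
    if not attributes:                 # no remaining attributes: majority voting
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: info_gain(records, labels, a))
    partitions = {}
    for rec, lab in zip(records, labels):
        partitions.setdefault(rec[best], []).append((rec, lab))
    remaining = [a for a in attributes if a != best]
    subtree = {}
    for value, part in partitions.items():
        recs, labs = zip(*part)
        subtree[value] = build_tree(list(recs), list(labs), remaining)
    return {best: subtree}
```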

Attribute Selection Measure: Information Gain (ID3/C4.5)
- Select the attribute with the highest information gain
- S contains si tuples of class Ci, for i = 1, ..., m
- The information measure gives the expected information required to classify an arbitrary tuple
- The entropy of attribute A with values a1, a2, ..., av gives the information still required after partitioning on A
- The information gained by branching on attribute A is the difference between the two (see the formulas below)
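In this notation, writing s for the total number of tuples and sij for the number of tuples of class Ci in the subset where A takes value aj, the standard ID3 definitions of these three quantities are:

$$ I(s_1, s_2, \ldots, s_m) = -\sum_{i=1}^{m} p_i \log_2 p_i, \qquad p_i = \frac{s_i}{s} $$

$$ E(A) = \sum_{j=1}^{v} \frac{s_{1j} + \cdots + s_{mj}}{s}\; I(s_{1j}, \ldots, s_{mj}) $$

$$ \mathrm{Gain}(A) = I(s_1, \ldots, s_m) - E(A) $$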

Attribute Selection by Information Gain Computation
- Class P: buys_computer = "yes"; Class N: buys_computer = "no"
- I(p, n) = I(9, 5) = 0.940
- Compute the entropy for age: the partition "age <= 30" has 5 out of 14 samples, with 2 yes's and 3 no's, and contributes a 5/14-weighted term; the other age ranges, and the other attributes, are handled similarly (worked values below)
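The worked numbers, assuming the standard 14-tuple buys_computer training set from Han & Kamber (age <= 30: 2 yes / 3 no; age 31..40: 4 yes / 0 no; age > 40: 3 yes / 2 no), are:

$$ E(\text{age}) = \frac{5}{14} I(2,3) + \frac{4}{14} I(4,0) + \frac{5}{14} I(3,2) = 0.694 $$

$$ \mathrm{Gain}(\text{age}) = I(9,5) - E(\text{age}) = 0.940 - 0.694 = 0.246 $$

Similarly, Gain(income) = 0.029, Gain(student) = 0.151, and Gain(credit_rating) = 0.048, so age is selected as the splitting attribute at the root.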

Computing Information Gain for Continuous-Valued Attributes
- Let A be a continuous-valued attribute
- The best split point for A must be determined
  - Sort the values of A in increasing order
  - Typically, the midpoint between each pair of adjacent values is considered as a possible split point: (ai + ai+1)/2 is the midpoint between the values ai and ai+1
  - The point with the minimum expected information requirement for A is selected as the split point for A (see the formula below)
- Split: D1 is the set of tuples in D satisfying A <= split-point, and D2 is the set of tuples in D satisfying A > split-point
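For each candidate split point, the expected information requirement is evaluated over the resulting binary partition; writing I(Dj) for the information of the class distribution within Dj, as defined above, this is:

$$ \mathrm{Info}_A(D) = \frac{|D_1|}{|D|}\, I(D_1) + \frac{|D_2|}{|D|}\, I(D_2) $$

and the midpoint minimizing Info_A(D) is chosen as the split point.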

Extracting Classification Rules from Trees
- Represent the knowledge in the form of IF-THEN rules
- One rule is created for each path from the root to a leaf
- Each attribute-value pair along a path forms a conjunction
- The leaf node holds the class prediction
- Rules are easier for humans to understand
- Example:
  - IF age = ">40" AND credit_rating = "excellent" THEN buys_computer = "yes"
  - IF age = "<=30" AND credit_rating = "fair" THEN buys_computer = "no"

Avoid Overfitting in Classification
- Overfitting: an induced tree may overfit the training data
  - Too many branches, some of which may reflect anomalies due to noise or outliers
  - Poor accuracy for unseen samples
- Two approaches to avoid overfitting
  - Prepruning: halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold
    - It is difficult to choose an appropriate threshold
  - Postpruning: remove branches from a "fully grown" tree to obtain a sequence of progressively pruned trees
    - Use a set of data different from the training data to decide which is the "best pruned tree" (see the sketch below)

Approaches to Determine the Final Tree Size
- Separate training (2/3) and testing (1/3) sets
- Use cross-validation
- Use all the data for training, but apply a statistical test (e.g., chi-square) to estimate whether expanding or pruning a node improves the entire distribution
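One concrete way to combine a held-out set with a sequence of progressively pruned trees is cost-complexity pruning; the sketch below uses scikit-learn for illustration (the dataset and split are stand-ins, and this is not code from the lecture).

```python
# Postpruning sketch: grow a full tree, enumerate progressively pruned trees
# along the cost-complexity path, and keep the one scoring best on held-out data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=1/3, random_state=0)

# The pruning path yields one ccp_alpha per pruned subtree of the fully grown tree.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)

best_alpha, best_score = 0.0, -1.0
for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    score = tree.score(X_valid, y_valid)   # pick the "best pruned tree" on held-out data
    if score > best_score:
        best_alpha, best_score = alpha, score
print(f"best ccp_alpha = {best_alpha:.4f}, validation accuracy = {best_score:.3f}")
```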

Enhancements to Basic Decision Tree Induction
- Allow for continuous-valued attributes
  - Dynamically define new discrete-valued attributes that partition the continuous attribute values into a discrete set of intervals
- Handle missing attribute values
  - Assign the most common value of the attribute, or
  - Assign a probability to each of the possible values
- Attribute construction
  - Create new attributes based on existing ones that are sparsely represented
  - This reduces fragmentation, repetition, and replication

Classification in Large Databases
- Classification is a classical problem extensively studied by statisticians and machine learning researchers
- Scalability: classifying data sets with millions of examples and hundreds of attributes at reasonable speed
- Why decision tree induction in data mining?
  - Relatively fast learning speed (compared with other classification methods)
  - Convertible to simple and easy-to-understand classification rules
  - Can use SQL queries for accessing databases
  - Comparable classification accuracy with other methods

Scalable Decision Tree Induction Methods
- SLIQ (EDBT'96, Mehta et al.): builds an index for each attribute; only the class list and the current attribute list reside in memory
- SPRINT (VLDB'96, J. Shafer et al.): constructs an attribute-list data structure
- PUBLIC (VLDB'98, Rastogi & Shim): integrates tree splitting and tree pruning, stopping tree growth earlier
- RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti): separates the scalability aspects from the criteria that determine the quality of the tree; builds an AVC-list (attribute, value, class label)

Presentation of Classification Results
- Visualization of a decision tree in SGI/MineSet 3.0
- Interactive visual mining by Perception-Based Classification (PBC)

Bayesian Classification: Why?
- Probabilistic learning: calculates explicit probabilities for hypotheses; among the most practical approaches to certain types of learning problems
- Incremental: each training example can incrementally increase or decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data
- Probabilistic prediction: predicts multiple hypotheses, weighted by their probabilities
- Standard: even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured

Bayes' Theorem: Basics
- Let X be a data sample whose class label is unknown
- Let H be a hypothesis that X belongs to class C
- For classification problems, determine P(H|X): the probability that the hypothesis holds given the observed data sample X
- P(H): prior probability of hypothesis H (i.e., the initial probability before we observe any data; it reflects the background knowledge)
- P(X): probability that the sample data is observed
- P(X|H): probability of observing the sample X, given that the hypothesis holds

Bayes' Theorem
- Given training data X, the posterior probability of a hypothesis H, P(H|X), follows Bayes' theorem (see the formula below)
- Informally, this can be written as: posterior = likelihood x prior / evidence
- The MAP (maximum a posteriori) hypothesis is the hypothesis with the largest posterior probability
- Practical difficulty: requires initial knowledge of many probabilities, at significant computational cost
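Written out, the theorem and the MAP hypothesis referenced above are (a standard formulation, with H ranging over the candidate hypotheses):

$$ P(H \mid X) = \frac{P(X \mid H)\, P(H)}{P(X)} $$

$$ h_{\mathrm{MAP}} = \arg\max_{H} P(H \mid X) = \arg\max_{H} P(X \mid H)\, P(H) $$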

Naïve Bayes Classifier
- A simplifying assumption: attributes are conditionally independent given the class
  - The probability of observing, say, two attribute values y1 and y2 together, given that the current class is C, is the product of the probabilities of each value taken separately given that class: P(y1, y2 | C) = P(y1 | C) * P(y2 | C)
  - No dependence relations between attributes are modeled
- This greatly reduces the computation cost: only the class distributions need to be counted
- Once the probability P(X|Ci) is known, assign X to the class with the maximum P(X|Ci) * P(Ci) (written out below)
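In formula form, with X = (x1, ..., xn) and classes Ci, the assumption and the resulting decision rule are:

$$ P(X \mid C_i) = \prod_{k=1}^{n} P(x_k \mid C_i) $$

and X is assigned to the class Ci that maximizes P(X | Ci) P(Ci).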

Training Dataset
- Classes: C1: buys_computer = "yes"; C2: buys_computer = "no"
- Data sample X = (age <= 30, income = medium, student = yes, credit_rating = fair)

Naïve Bayesian Classifier: An Example
- Compute P(X|Ci) for each class:
  - P(age <= 30 | buys_computer = "yes") = 2/9 = 0.222
  - P(age <= 30 | buys_computer = "no") = 3/5 = 0.6
  - P(income = "medium" | buys_computer = "yes") = 4/9 = 0.444
  - P(income = "medium" | buys_computer = "no") = 2/5 = 0.4
  - P(student = "yes" | buys_computer = "yes") = 6/9 = 0.667
  - P(student = "yes" | buys_computer = "no") = 1/5 = 0.2
  - P(credit_rating = "fair" | buys_computer = "yes") = 6/9 = 0.667
  - P(credit_rating = "fair" | buys_computer = "no") = 2/5 = 0.4
- For X = (age <= 30, income = medium, student = yes, credit_rating = fair):
  - P(X | buys_computer = "yes") = 0.222 x 0.444 x 0.667 x 0.667 = 0.044
  - P(X | buys_computer = "no") = 0.6 x 0.4 x 0.2 x 0.4 = 0.019
  - P(X | buys_computer = "yes") * P(buys_computer = "yes") = 0.028
  - P(X | buys_computer = "no") * P(buys_computer = "no") = 0.007
- Therefore, X belongs to class "buys_computer = yes"

Naïve Bayesian Classifier: Comments
- Advantages
  - Easy to implement
  - Good results obtained in most cases
- Disadvantages
  - The assumption of class-conditional independence leads to a loss of accuracy
  - In practice, dependencies exist among variables
    - E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
    - Dependencies among these cannot be modeled by a naïve Bayesian classifier
- How to deal with these dependencies? Bayesian belief networks

Bayesian Belief Networks
- A Bayesian belief network allows a subset of the variables to be conditionally independent
- A graphical model of causal relationships
  - Represents dependencies among the variables
  - Gives a specification of the joint probability distribution
- Nodes are random variables; links represent dependencies
  - E.g., X and Y are the parents of Z, and Y is the parent of P; there is no dependency between Z and P
  - The graph has no loops or cycles

Bayesian Belief Network: An Example
- Variables in the example network: Family History (FH), Smoker (S), LungCancer (LC), Emphysema, PositiveXRay, Dyspnea (the network diagram itself is not reproduced here)
- Conditional probability table (CPT) for LungCancer given its parents FH and S (with ~ denoting negation):

            (FH, S)   (FH, ~S)   (~FH, S)   (~FH, ~S)
  LC          0.8       0.5        0.7        0.1
  ~LC         0.2       0.5        0.3        0.9

Bayesian Belief Networks
- The conditional probability table for the variable LungCancer shows the conditional probability of LungCancer for each possible combination of the values of its parents
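Together, the CPTs specify the joint probability distribution mentioned earlier. The standard factorization for belief networks, with Parents(Xi) denoting the parents of node Xi, is:

$$ P(x_1, \ldots, x_n) = \prod_{i=1}^{n} P\bigl(x_i \mid \mathrm{Parents}(X_i)\bigr) $$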

Learning Bayesian Networks
- Several cases:
  - Both the network structure and all variables are observable: learn only the CPTs
  - Network structure known, some variables hidden: use a gradient-descent method, analogous to neural network learning
  - Network structure unknown, all variables observable: search through the model space to reconstruct the graph topology
  - Unknown structure, all variables hidden: no good algorithms are known for this purpose
- Reference: D. Heckerman, "Bayesian networks for data mining"

What Is Prediction?
- (Numerical) prediction is similar to classification
  - Construct a model
  - Use the model to predict a continuous or ordered value for a given input
- Prediction is different from classification
  - Classification refers to predicting categorical class labels
  - Prediction models continuous-valued functions
- Major method for prediction: regression
  - Models the relationship between one or more independent (predictor) variables and a dependent (response) variable
- Regression analysis
  - Linear and multiple regression
  - Nonlinear regression
  - Other regression methods: generalized linear model, Poisson regression, log-linear models, regression trees

Linear Regression
- Linear regression involves a response variable y and a single predictor variable x
  - y = w0 + w1 x, where w0 (y-intercept) and w1 (slope) are the regression coefficients
  - Method of least squares: estimates the best-fitting straight line (see the formulas below)
- Multiple linear regression involves more than one predictor variable
  - Training data is of the form (X1, y1), (X2, y2), ..., (X|D|, y|D|)
  - E.g., for 2-D data we may have: y = w0 + w1 x1 + w2 x2
  - Solvable by an extension of the least-squares method or by using software such as SAS or S-Plus
  - Many nonlinear functions can be transformed into the above
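For the single-predictor case, the standard least-squares estimates of the coefficients, with x̄ and ȳ denoting the sample means over the |D| training tuples, are:

$$ w_1 = \frac{\sum_{i=1}^{|D|} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{|D|} (x_i - \bar{x})^2}, \qquad w_0 = \bar{y} - w_1 \bar{x} $$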

Nonlinear Regression
- Some nonlinear models can be modeled by a polynomial function
- A polynomial regression model can be transformed into a linear regression model; for example,
  y = w0 + w1 x + w2 x^2 + w3 x^3
  is convertible to linear form with the new variables x2 = x^2, x3 = x^3:
  y = w0 + w1 x + w2 x2 + w3 x3
- Other functions, such as the power function, can also be transformed into a linear model
- Some models are intractably nonlinear (e.g., a sum of exponential terms); least-squares estimates may still be obtainable through extensive calculation on more complex formulae

Other Regression Methods
- Generalized linear model: the foundation on which linear regression can be applied to modeling categorical response variables; the variance of y is a function of the mean value of y, not a constant
- Logistic regression: models the probability of some event occurring as a linear function of a set of predictor variables
- Poisson regression: models data that exhibit a Poisson distribution
- Log-linear models (for categorical data): approximate discrete multidimensional probability distributions; also useful for data compression and smoothing
- Regression trees and model trees: trees used to predict continuous values rather than class labels
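The "linear function of a set of predictor variables" in logistic regression is conventionally expressed through the log-odds of the event probability p (a standard formulation):

$$ \log \frac{p}{1 - p} = w_0 + w_1 x_1 + \cdots + w_k x_k $$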
