
Text Classification with scikit-learn




Categories: Data Mining, Machine Learning, Python    2014-04-13 20:53

Tags: 20newsgroups, text mining, Python, scikit, scipy

I couldn't find a unified benchmark in the text-mining papers, so I had to run the experiments myself. If any readers passing by know of a current benchmark for 20newsgroups (or another good public dataset), ideally with classification results for every class, whether using all features or only a subset, please leave a comment. Many thanks!

Now, on to the main content. The 20newsgroups site offers three datasets; here we use the original one, 20news-19997.tar.gz.

The post is organized into the following steps:

1. Load the dataset
2. Extract features
3. Classify
   o Naive Bayes
   o KNN
   o SVM
4. Cluster

Note: there is a reference example on the scipy/scikit-learn site, but it is somewhat messy and contains bugs. In this post we go through it piece by piece.

Environment: Python 2.7 + SciPy (scikit-learn)

1. Loading the dataset

Download the dataset 20news-19997.tar.gz, extract it into the scikit_learn_data folder, and load the data; see the comments in the code.

# first extract the 20 news_group dataset to /scikit_learn_data
from sklearn.datasets import fetch_20newsgroups
# all categories
# newsgroup_train = fetch_20newsgroups(subset='train')
# part categories
categories = ['comp.graphics',
              'comp.os.ms-windows.misc',
              'comp.sys.ibm.pc.hardware',
              'comp.sys.mac.hardware',
              'comp.windows.x']
newsgroup_train = fetch_20newsgroups(subset='train', categories=categories)

To check whether the data loaded correctly:

# print category names
from pprint import pprint
pprint(list(newsgroup_train.target_names))

Result:

['comp.graphics',
 'comp.os.ms-windows.misc',
 'comp.sys.ibm.pc.hardware',
 'comp.sys.mac.hardware',
 'comp.windows.x']

2. Feature extraction

The newsgroup_train we just loaded is a collection of documents; we need to extract features from it (term frequencies and the like) using fit_transform.

Method 1. HashingVectorizer, with a fixed number of features

# newsgroup_train.data is the original documents, but we need to extract
# the feature vectors in order to model the text data
from sklearn.feature_extraction.text import HashingVectorizer
vectorizer = HashingVectorizer(stop_words='english', non_negative=True,
                               n_features=10000)
fea_train = vectorizer.fit_transform(newsgroup_train.data)
# newsgroups_test is fetched with subset='test' in section 3.1 below
fea_test = vectorizer.fit_transform(newsgroups_test.data)

# return feature vector 'fea_train' [n_samples, n_features]
print 'Size of fea_train:' + repr(fea_train.shape)
print 'Size of fea_test:' + repr(fea_test.shape)
# 11314 documents, 130107 vectors for all categories
print 'The average feature sparsity is {0:.3f}%'.format(
    fea_train.nnz / float(fea_train.shape[0] * fea_train.shape[1]) * 100)

Result:

Size of fea_train:(2936, 10000)
Size of fea_test:(1955, 10000)
The average feature sparsity is 1.002%

Because we kept only 10,000 terms, i.e. 10,000 feature dimensions, the sparsity is not that low yet. Counting with TfidfVectorizer instead yields tens of thousands of dimensions; over all samples I counted about 130,000 features, which makes for a rather sparse matrix.

**************************************************************************************************************************

The code comments above point out that the feature dimensionality TF-IDF extracts on train and on test differs. So how do we make them match? There are two ways:

Method 2. CountVectorizer+TfidfTransformer

Have the two CountVectorizers share a vocabulary:

# ----------------------------------------------------
# method 1: CountVectorizer + TfidfTransformer
print '*************************\nCountVectorizer+TfidfTransformer\n*************************'
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
count_v1 = CountVectorizer(stop_words='english', max_df=0.5)
counts_train = count_v1.fit_transform(newsgroup_train.data)
print "the shape of train is " + repr(counts_train.shape)

# reuse the training vocabulary so test gets the same feature dimensions
count_v2 = CountVectorizer(vocabulary=count_v1.vocabulary_)
counts_test = count_v2.fit_transform(newsgroups_test.data)
print "the shape of test is " + repr(counts_test.shape)

tfidftransformer = TfidfTransformer()

tfidf_train = tfidftransformer.fit(counts_train).transform(counts_train)
# note: fitting on counts_test recomputes IDF from test statistics;
# reusing the transformer fit on counts_train is the more usual choice
tfidf_test = tfidftransformer.fit(counts_test).transform(counts_test)

Result:

*************************
CountVectorizer+TfidfTransformer
*************************
the shape of train is (2936, 633)
the shape of test is (1955, 633)

Method 3. TfidfVectorizer

Have the two TfidfVectorizers share a vocabulary:

# method 2: TfidfVectorizer
print '*************************\nTfidfVectorizer\n*************************'
from sklearn.feature_extraction.text import TfidfVectorizer
tv = TfidfVectorizer(sublinear_tf=True,
                     max_df=0.5,
                     stop_words='english')
tfidf_train_2 = tv.fit_transform(newsgroup_train.data)
# build a second vectorizer that reuses the training vocabulary
tv2 = TfidfVectorizer(vocabulary=tv.vocabulary_)
tfidf_test_2 = tv2.fit_transform(newsgroups_test.data)
print "the shape of train is " + repr(tfidf_train_2.shape)
print "the shape of test is " + repr(tfidf_test_2.shape)
analyze = tv.build_analyzer()
tv.get_feature_names()  # statistical features/terms

Result:

*************************
TfidfVectorizer
*************************
the shape of train is (2936, 633)
the shape of test is (1955, 633)

In addition, sklearn ships a prepackaged feature-extraction function, fetch_20newsgroups_vectorized.

Method 4. fetch_20newsgroups_vectorized

However, this method cannot extract features for just a few chosen categories; it always returns the features for all 20 categories:

print '*************************\nfetch_20newsgroups_vectorized\n*************************'
from sklearn.datasets import fetch_20newsgroups_vectorized
tfidf_train_3 = fetch_20newsgroups_vectorized(subset='train')
tfidf_test_3 = fetch_20newsgroups_vectorized(subset='test')
print "the shape of train is " + repr(tfidf_train_3.data.shape)
print "the shape of test is " + repr(tfidf_test_3.data.shape)

Result:

*************************
fetch_20newsgroups_vectorized
*************************
the shape of train is (11314, 130107)
the shape of test is (7532, 130107)
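If you do want only a subset of the categories from this pre-vectorized data, one workaround is to filter the rows by label. This is a sketch, not from the original post; it assumes the returned Bunch exposes target and target_names the same way fetch_20newsgroups does:

import numpy as np

# indices of the categories we care about, looked up by name
cat_idx = [tfidf_train_3.target_names.index(c) for c in categories]
rows = np.where(np.in1d(tfidf_train_3.target, cat_idx))[0]

sub_data = tfidf_train_3.data[rows]      # row-slice of the sparse matrix
sub_target = tfidf_train_3.target[rows]
print 'subset shape:' + repr(sub_data.shape)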

3. Classification

3.1 Multinomial Naive Bayes Classifier

See the code and comments; it should be self-explanatory.

######################################################
# Multinomial Naive Bayes Classifier
print '*************************\nNaive Bayes\n*************************'
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
newsgroups_test = fetch_20newsgroups(subset='test',
                                     categories=categories)
fea_test = vectorizer.fit_transform(newsgroups_test.data)
# create the Multinomial Naive Bayes Classifier
clf = MultinomialNB(alpha=0.01)
clf.fit(fea_train, newsgroup_train.target)
pred = clf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
# notice that f1_score is not equal to 2*precision*recall/(precision+recall)
# because the m_precision and m_recall we get are averaged, whereas
# metrics.f1_score() calculates a weighted average, i.e., it takes the
# number of samples in each class into consideration.

Note the last three comment lines: why is f1 ≠ 2*(precision*recall)/(precision+recall)? A concrete example follows the helper function below.

The helper function calculate_result computes precision, recall, and f1:

def calculate_result(actual, pred):
    m_precision = metrics.precision_score(actual, pred)
    m_recall = metrics.recall_score(actual, pred)
    print 'predict info:'
    print 'precision:{0:.3f}'.format(m_precision)
    print 'recall:{0:0.3f}'.format(m_recall)
    print 'f1-score:{0:.3f}'.format(metrics.f1_score(actual, pred))
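To make the comment about f1 concrete, here is a minimal sketch on a toy label set (assuming weighted averaging, the multiclass behavior the post relies on): the weighted f1 is the weighted mean of the per-class f1 scores, which is not the harmonic mean of the weighted precision and recall.

from sklearn import metrics

actual = [0, 0, 1, 1, 1, 2]
pred = [0, 1, 1, 1, 2, 2]

p = metrics.precision_score(actual, pred, average='weighted')
r = metrics.recall_score(actual, pred, average='weighted')
f1 = metrics.f1_score(actual, pred, average='weighted')

# prints 0.667 vs 0.706: the two quantities differ
print 'weighted f1: {0:.3f}'.format(f1)
print 'harmonic mean of averaged p/r: {0:.3f}'.format(2 * p * r / (p + r))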

3.2 KNN:

######################################################
# KNN Classifier
from sklearn.neighbors import KNeighborsClassifier
print '*************************\nKNN\n*************************'
knnclf = KNeighborsClassifier()  # default with k=5
knnclf.fit(fea_train, newsgroup_train.target)
pred = knnclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
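The default k=5 is not necessarily the best choice for this data. A quick hand-rolled sweep over a few values of n_neighbors (a sketch, not in the original post) looks like this:

# try a few neighborhood sizes; small k fits noise, large k over-smooths
for k in [1, 5, 15, 45]:
    knnclf = KNeighborsClassifier(n_neighbors=k)
    knnclf.fit(fea_train, newsgroup_train.target)
    pred = knnclf.predict(fea_test)
    print 'k = {0}'.format(k)
    calculate_result(newsgroups_test.target, pred)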

3.3 SVM:

######################################################
# SVM Classifier
from sklearn.svm import SVC
print '*************************\nSVM\n*************************'
svclf = SVC(kernel='linear')  # default kernel is 'rbf'
svclf.fit(fea_train, newsgroup_train.target)
pred = svclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)

Result:

*************************
Naive Bayes
*************************
predict info:
precision:0.7
recall:0.759
f1-score:0.760

*************************
KNN
*************************
predict info:
precision:0.2
recall:0.635
f1-score:0.636

*************************
SVM
*************************
predict info:
precision:0.777
recall:0.774
f1-score:0.774
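As an aside, for high-dimensional sparse text features sklearn's LinearSVC (a liblinear-based linear SVM) usually trains much faster than SVC(kernel='linear') with comparable accuracy; swapping it in is a one-line change (a sketch, not what the post ran):

from sklearn.svm import LinearSVC

svclf = LinearSVC()  # linear SVM trained with liblinear
svclf.fit(fea_train, newsgroup_train.target)
pred = svclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)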

4. Clustering

######################################################
# KMeans Cluster
from sklearn.cluster import KMeans
print '*************************\nKMeans\n*************************'
kmeans = KMeans(n_clusters=5)
kmeans.fit(fea_test)
calculate_result(newsgroups_test.target, kmeans.labels_)

Result:

*************************
KMeans
*************************
predict info:
precision:0.2
recall:0.226
f1-score:0.213
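Part of the reason these numbers look so bad is that KMeans assigns arbitrary cluster IDs, so comparing them to the true labels with precision/recall is only meaningful if the IDs happen to line up. Permutation-invariant clustering metrics give a fairer picture; a sketch:

# these scores are invariant to relabeling of the clusters
print 'ARI: {0:.3f}'.format(metrics.adjusted_rand_score(
    newsgroups_test.target, kmeans.labels_))
print 'NMI: {0:.3f}'.format(metrics.normalized_mutual_info_score(
    newsgroups_test.target, kmeans.labels_))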

The full code for this post can be downloaded here.

The accuracy looks rather low... so let's use all the features instead. The results are below (one way to reproduce this run is sketched after them):

*************************
Naive Bayes
*************************
predict info:
precision:0.771
recall:0.770
f1-score:0.769

*************************
KNN
*************************
predict info:
precision:0.652
recall:0.5
f1-score:0.5

*************************
SVM
*************************
predict info:
precision:0.819
recall:0.816
f1-score:0.816

*************************
KMeans
*************************
predict info:
precision:0.2
recall:0.313
f1-score:0.266
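The post doesn't show the code for this full-feature run; one way to reproduce it (a sketch) is to reuse the vocabulary-sharing TfidfVectorizer features from Method 3 in place of the 10,000-dimensional hashed features, e.g. for Naive Bayes:

# swap the hashed 10,000-dim features for the full tf-idf features
fea_train = tfidf_train_2
fea_test = tfidf_test_2

clf = MultinomialNB(alpha=0.01)
clf.fit(fea_train, newsgroup_train.target)
pred = clf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)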

More Python learning material will keep coming; follow this blog and Rachel Zhang on Sina Weibo.

