On stopwords, filtering and data sparsity for sentiment analysis of Twitter

Hassan Saif, Miriam Fernández, Yulan He, Harith Alani

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Sentiment classification over Twitter is usually affected by the noisy nature (abbreviations, irregular forms) of tweet data. A popular procedure to reduce the noise of textual data is to remove stopwords, either by using pre-compiled stopword lists or by applying more sophisticated methods for dynamic stopword identification. However, the effectiveness of removing stopwords in the context of Twitter sentiment classification has been debated in recent years. In this paper we investigate whether removing stopwords helps or hampers the effectiveness of Twitter sentiment classification methods. To this end, we apply six different stopword identification methods to Twitter data from six different datasets and observe how removing stopwords affects two well-known supervised sentiment classification methods. We assess the impact of removing stopwords by observing fluctuations in the level of data sparsity, the size of the classifier's feature space and its classification performance. Our results show that using pre-compiled lists of stopwords negatively impacts the performance of Twitter sentiment classification approaches. On the other hand, the dynamic generation of stopword lists, by removing those infrequent terms appearing only once in the corpus, appears to be the optimal method for maintaining high classification performance while reducing data sparsity and substantially shrinking the feature space.
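The singleton-based method the abstract favours can be illustrated with a minimal sketch: build corpus-wide term frequencies, treat every term that occurs exactly once as a dynamic stopword, and compare the vocabulary (feature-space) size before and after removal. The toy tweets below are invented for illustration and do not come from the paper's datasets.

```python
from collections import Counter

# Toy corpus standing in for tweets (illustrative only; not from the paper's datasets).
tweets = [
    "loving the new phone so much",
    "the battery life is terrible tbh",
    "new update broke the camera smh",
    "so happy with the camera quality",
]

# Corpus-wide term frequencies over whitespace tokens.
tokens = [term for tweet in tweets for term in tweet.split()]
freq = Counter(tokens)

# Dynamic stopword list: terms appearing exactly once in the corpus (singletons).
singletons = {term for term, count in freq.items() if count == 1}

vocab_before = set(freq)
vocab_after = vocab_before - singletons

print(f"feature space: {len(vocab_before)} -> {len(vocab_after)} terms")
```

Even on this tiny corpus the feature space shrinks substantially (here from 18 terms to 4), which mirrors the abstract's observation that singleton removal shrinks the feature space while the frequent, potentially sentiment-bearing terms survive.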
Original language: English
Title of host publication: LREC 2014, Ninth International Conference on Language Resources and Evaluation. Proceedings
Editors: Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, et al.
Pages: 810-817
Number of pages: 8
Publication status: Published - 2014
Event: 9th International Conference on Language Resources and Evaluation - Reykjavik, Iceland
Duration: 26 May 2014 - 31 May 2014

Conference

Conference: 9th International Conference on Language Resources and Evaluation
Abbreviated title: LREC 2014
Country: Iceland
City: Reykjavik
Period: 26/05/14 - 31/05/14

Bibliographical note

The LREC 2014 Proceedings are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License

Keywords

  • sentiment analysis
  • stopwords
  • data sparsity

Cite this

Saif, H., Fernández, M., He, Y., & Alani, H. (2014). On stopwords, filtering and data sparsity for sentiment analysis of Twitter. In N. Calzolari, K. Choukri, T. Declerck, et al. (Eds.), LREC 2014, Ninth International Conference on Language Resources and Evaluation. Proceedings (pp. 810-817).
@inproceedings{6dcbeb3de7974177bfd1c1935b51afc6,
title = "On stopwords, filtering and data sparsity for sentiment analysis of Twitter",
keywords = "sentiment analysis, stopwords, data sparsity",
author = "Hassan Saif and Miriam Fern{\'a}ndez and Yulan He and Harith Alani",
note = "The LREC 2014 Proceedings are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License",
year = "2014",
language = "English",
isbn = "978-2-9517408-8-4",
pages = "810--817",
editor = "Nicoletta Calzolari and Khalid Choukri and Thierry Declerck and {et al}",
booktitle = "LREC 2014, Ninth International Conference on Language Resources and Evaluation. Proceedings",

}
