An improved memory management scheme for large scale graph computing engine GraphChi

Yifang Jiang, Diao Zhang, Kai Chen, Qu Zhou, Yi Zhou, Jianhua He

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

GraphChi is the first reported disk-based graph engine that can efficiently handle billion-scale graphs on a single PC. It can execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. With its novel parallel sliding windows (PSW) technique for loading subgraphs from disk into memory to update vertices and edges, it achieves data processing performance close to, and sometimes better than, that of mainstream distributed graph engines. However, the GraphChi authors noted that memory is not utilized effectively on large datasets, which leads to suboptimal computation performance. In this paper, motivated by the concept of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory for the whole computing process. Part-in-memory mode was implemented with only about 40 additional lines of code in the original GraphChi engine. Extensive experiments were performed on large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the part-in-memory memory management approach reduces GraphChi running time by up to 60% for the PageRank algorithm. Interestingly, pinning a larger portion of the data in memory does not always improve performance when the whole dataset cannot fit in memory; there exists an optimal portion of data to keep in memory that achieves the best computational performance.
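The core idea described in the abstract — pinning a fixed part of the data in memory for the whole computation while streaming the rest from disk each pass — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual change to GraphChi (which is roughly 40 lines inside the engine itself); all names here (`load_shard`, `update`, `pin_fraction`) are hypothetical.

```python
# Hypothetical sketch of the part-in-memory idea: pin a fixed fraction of
# the graph shards in RAM for the entire run, and reload the remaining
# shards from disk on every iteration. Names are illustrative only.
from typing import Callable, Dict, List


def run_iterations(shard_ids: List[int], pin_fraction: float, num_iters: int,
                   load_shard: Callable[[int], object],
                   update: Callable[[object], None]) -> None:
    """Pin the first `pin_fraction` of shards in memory; stream the rest."""
    n_pinned = int(len(shard_ids) * pin_fraction)
    # Load the pinned shards once, up front; they stay resident throughout.
    pinned: Dict[int, object] = {s: load_shard(s) for s in shard_ids[:n_pinned]}
    for _ in range(num_iters):
        for s in shard_ids:
            shard = pinned.get(s)      # pinned shard: no disk I/O
            if shard is None:
                shard = load_shard(s)  # unpinned shard: reload from disk
            update(shard)
```

With `pin_fraction = 0.5` and 3 iterations over 4 shards, the pinned shards are read from disk once while each unpinned shard is read 3 times, which is the I/O saving the paper exploits. The paper's finding that a larger pinned portion is not always better reflects the memory pressure the pinned data itself creates when the full dataset does not fit.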

Original language: English
Title of host publication: Proceedings: 2014 IEEE International Conference on Big Data
Editors: Jimmy Lin, Jian Pei, Xiaohua Hu, Wo Chang, Raghunath Nambiar, Charu Aggarwal, Nick Cercone, Vasant Honavar, Jun Huan, Bamshad Mobasher, Saumyadipta Pyne
Publisher: IEEE
Pages: 58-63
Number of pages: 6
ISBN (Print): 978-1-4799-5665-4
DOI: 10.1109/BigData.2014.7004357
Publication status: Published - 2015
Event: 2nd IEEE International Conference on Big Data - Washington DC, United States
Duration: 27 Oct 2014 - 30 Oct 2014

Conference

Conference: 2nd IEEE International Conference on Big Data
Abbreviated title: IEEE Big Data 2014
Country: United States
City: Washington DC
Period: 27/10/14 - 30/10/14

Bibliographical note

Funding: National Natural Science Foundation of China (Grant No. 61201384, 61129001), and Shanghai Science and Technology Committees of Scientific Research Project (Grant No. 14DZ1101200).

Keywords

  • big data
  • graph process
  • GraphChi
  • part-in-memory mode


Cite this

Jiang, Y., Zhang, D., Chen, K., Zhou, Q., Zhou, Y., & He, J. (2015). An improved memory management scheme for large scale graph computing engine GraphChi. In J. Lin, J. Pei, X. Hu, W. Chang, R. Nambiar, C. Aggarwal, N. Cercone, V. Honavar, J. Huan, B. Mobasher, & S. Pyne (Eds.), Proceedings: 2014 IEEE International Conference on Big Data (pp. 58-63). IEEE. https://doi.org/10.1109/BigData.2014.7004357