IJMTES – ENERGY PROFICIENT FORECASTING OF MAP REDUCE FOR BIG DATA REAL TIME APPLIANCES

Journal Title : International Journal of Modern Trends in Engineering and Science

Paper Title : ENERGY PROFICIENT FORECASTING OF MAP REDUCE FOR BIG DATA REAL TIME APPLIANCES

Author’s Name : A Kaliappan | M Sabarivel

Volume 04 Issue 03 2017

ISSN No: 2348-3121

Page no: 24-29

Abstract – Nowadays, the results of data mining applications grow stale and obsolete over time, and energy consumption is a major concern for many corporate firms: additional workload and heavier computational processing drive energy costs up. Incremental processing is a promising approach to refreshing mining results, because it reuses previously saved states to avoid the expense of re-computation from scratch. In this paper, we propose the Energy MapReduce Scheduling Algorithm (EMRSA), a novel incremental processing extension to MapReduce, the most widely used framework for mining big data. MapReduce is a programming model for processing and generating large amounts of data in parallel. EMRSA delivers higher energy savings with fewer map tasks: priority-based scheduling allocates tasks according to the necessity and utilization of the jobs, and reducing the number of maps lowers the system's work, which in turn improves energy efficiency. The final results present an experimental comparison of the different algorithms considered in this paper.

Keywords: Big Data, EMRSA, MapReduce, Incremental Processing
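To make the two ideas from the abstract concrete (reusing previously saved map outputs instead of recomputing them, and ordering map tasks by job priority), the following Python sketch gives a minimal, hypothetical illustration. The Job fields necessity and utilization, the word_count_map function, and the saved_state dictionary are all assumptions introduced here for illustration; this is a sketch of the general technique, not the paper's EMRSA implementation.

```python
import heapq
from collections import defaultdict

class Job:
    """Hypothetical job record; 'necessity' and 'utilization' stand in for
    the priority criteria named in the abstract."""
    def __init__(self, name, necessity, utilization, splits):
        self.name = name
        self.necessity = necessity      # urgency of the job (higher = sooner)
        self.utilization = utilization  # expected resource usage (lower = sooner)
        self.splits = splits            # input splits, one map task each

    def priority(self):
        # Smaller tuple = scheduled earlier (heapq is a min-heap).
        return (-self.necessity, self.utilization)

def schedule_map_tasks(jobs, saved_state):
    """Order map tasks by job priority and skip splits whose outputs are
    already in saved_state (incremental reuse instead of recomputation)."""
    heap = [(job.priority(), i, job) for i, job in enumerate(jobs)]
    heapq.heapify(heap)
    schedule = []
    while heap:
        _, _, job = heapq.heappop(heap)
        for split in job.splits:
            key = (job.name, split)
            if key in saved_state:
                continue  # reuse the previous map output: fewer maps, less energy
            schedule.append(key)
    return schedule

def word_count_map(split_text):
    """A plain MapReduce-style map function used as the per-split work."""
    counts = defaultdict(int)
    for word in split_text.split():
        counts[word] += 1
    return counts

if __name__ == "__main__":
    jobs = [
        Job("log-analysis", necessity=2, utilization=0.7, splits=["s1", "s2"]),
        Job("billing",      necessity=5, utilization=0.4, splits=["s1"]),
    ]
    # Pretend split s1 of log-analysis was processed in a previous run.
    saved_state = {("log-analysis", "s1"): word_count_map("error warn error")}
    print(schedule_map_tasks(jobs, saved_state))
    # -> [('billing', 's1'), ('log-analysis', 's2')]
```

In this sketch the higher-priority billing job is scheduled first, and the already-computed split of the log-analysis job is skipped entirely, which is the sense in which fewer map tasks translate into less system work.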
