Journal Title: International Journal of Modern Trends in Engineering and Science


Author's Name: H Srivigneshwari | U Neeraja

Volume 03 Issue 10 2016

ISSN No: 2348-3121

Page no: 81-84

Abstract – Face recognition under blur, illumination and pose variations is a difficult task in image processing. Existing methods perform well under illumination and pose variations but fail in the presence of blur, so we propose a new method for blurred face recognition. First, structure and texture blur-invariant features are extracted, and a complete description of the blurred image is generated by fusing them. Local phase quantization (LPQ) is extracted in a densely sampled way and, to enhance its performance, a vector of locally aggregated descriptors (VLAD) is employed as the texture blur-invariant feature. The histogram of oriented gradients (HOG) is used as the structure blur-invariant feature: an improved HOG is extracted and then fused with the original HOG by canonical correlation analysis (CCA). For handling pose and illumination variations, we follow the MOBILAP algorithm. The expected results demonstrate the improvement of our method in blurred face recognition.
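The abstract does not spell out how the LPQ texture code is computed; as a rough sketch only (a uniform 7×7 window, no coefficient decorrelation, and the hypothetical helper name `lpq` are all assumptions, not the authors' implementation), the blur-insensitive descriptor can be formed by quantizing the signs of the real and imaginary parts of four low-frequency short-time Fourier coefficients:

```python
import numpy as np

def lpq(img, win=7):
    """Minimal LPQ sketch: 8-bit phase codes, then a 256-bin histogram.

    Assumptions (not from the paper): uniform window, no decorrelation step.
    """
    a = 1.0 / win
    x = np.arange(win) - (win - 1) / 2
    w0 = np.ones(win)                 # zero-frequency (summing) 1-D filter
    w1 = np.exp(-2j * np.pi * a * x)  # 1-D filter at frequency a
    w2 = np.conj(w1)                  # 1-D filter at frequency -a

    def conv(f, rowf, colf):
        # separable 'valid' convolution: filter rows, then columns
        tmp = np.apply_along_axis(lambda r: np.convolve(r, rowf, 'valid'), 1, f)
        return np.apply_along_axis(lambda c: np.convolve(c, colf, 'valid'), 0, tmp)

    img = img.astype(float)
    # responses at the four frequency pairs used by LPQ: (a,0), (0,a), (a,a), (a,-a)
    F = [conv(img, w1, w0), conv(img, w0, w1),
         conv(img, w1, w1), conv(img, w1, w2)]
    # quantize the signs of real/imaginary parts into one 8-bit code per pixel
    code = np.zeros(F[0].shape, dtype=np.uint8)
    for i, f in enumerate(F):
        code |= (f.real > 0).astype(np.uint8) << (2 * i)
        code |= (f.imag > 0).astype(np.uint8) << (2 * i + 1)
    # the normalized 256-bin code histogram is the texture descriptor
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

In the dense-sampling setting described above, such a histogram (or the raw codes) would be computed over many overlapping patches rather than once per image.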

Keywords— Face Recognition; Illumination; Invariant Texture; Non-Uniform Motion
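The VLAD step used to aggregate the densely sampled LPQ descriptors can be sketched as follows. This is a generic VLAD encoder, not the authors' implementation: the helper name `vlad` is hypothetical, and in a real pipeline the codebook `centers` would be learned with k-means over training descriptors.

```python
import numpy as np

def vlad(descriptors, centers):
    """VLAD sketch: sum residuals of descriptors to their nearest codebook center."""
    # hard-assign each local descriptor to its nearest center (squared Euclidean)
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    K, D = centers.shape
    v = np.zeros((K, D))
    for k in range(K):
        assigned = descriptors[nearest == k]
        if len(assigned):
            v[k] = (assigned - centers[k]).sum(axis=0)  # residual sum per center
    v = v.ravel()                                       # K*D-dimensional encoding
    # signed square-root (power-law) then L2 normalization, as is common for VLAD
    v = np.sign(v) * np.sqrt(np.abs(v))
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

The resulting K×D vector is a compact image-level texture representation that the pipeline would then combine with the HOG-based structure feature.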
