
Criminal Detection: Study of Activation Functions and Optimizers

Journal: International Journal of Advanced Trends in Computer Science and Engineering (IJATCSE) (Vol.10, No. 2)

Publication Date:

Authors :

Page : 1145-1149

Keywords : Activation Functions; Convolutional Neural Networks; Deep Learning

Source : Download | Find it from : Google Scholar

Abstract

The face is the primary means of recognizing a person, transmitting information, communicating with others, and inferring people's feelings. Our faces reveal more than we think: a facial image may show personal characteristics such as ethnicity, gender, age, fitness, emotion, psychology, and occupation. Alongside recent advances in deep learning models, the exponential growth in the processing power and memory of computers has greatly increased the role of images in recognizing semantic patterns. Facial photographs can reveal such personality features in the same way that a textual message on social media reveals its author's individual characteristics. We investigate a new level of image comprehension by using deep learning to infer criminal proclivity from facial images. A convolutional neural network (CNN) deep learning model is used to differentiate between criminal and non-criminal facial images. Using tenfold cross-validation on a set of 5500 face images, the model's confusion matrix, training accuracy, and test accuracy are recorded. The CNN was more reliable than the SNN in learning to achieve the highest test accuracy, outperforming the SNN's test accuracy by 8%. Finally, dissection and visualization of the CNN's convolutional layers showed that it distinguished the two sets of images based on the shape of the face, eyebrows, top of the eye, pupils, nostrils, and lips. In this project we focus on activation functions and optimizers. Activation functions fall into two types, saturated and non-saturated; here we use non-saturated activation functions such as ReLU, SELU, and Softmax. Combining ReLU and Softmax yields a test accuracy of 99.3%, while combining SELU and Softmax yields 99.6%. Therefore, the SELU and Softmax combination gives the better accuracy.
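The non-saturated activation functions the abstract compares can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; the SELU constants are the standard values from the self-normalizing networks literature, and the softmax is shifted by its maximum for numerical stability.

```python
import numpy as np

# Standard SELU constants (assumed; from Klambauer et al.'s self-normalizing networks)
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def relu(x):
    """ReLU: zero for negative inputs, identity for positive inputs."""
    return np.maximum(0.0, x)

def selu(x):
    """SELU: scaled exponential linear unit, a self-normalizing variant of ELU."""
    return SELU_SCALE * np.where(x > 0, x, SELU_ALPHA * (np.exp(x) - 1.0))

def softmax(z):
    """Softmax over the last axis, shifted by the max for numerical stability.
    Typically used on the output layer to produce class probabilities."""
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

In the setting described above, ReLU or SELU would be applied after the convolutional layers, with Softmax on the final layer producing the criminal/non-criminal class probabilities.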

Last modified: 2021-04-12 16:33:50