Abstract:
Embedded systems and smartphones are vital to real-time applications and are reshaping how we interact with technology. Smartphones carry various sensors, including accelerometers, gyroscopes, and magnetometers, and Deep Learning (DL) models extend the capabilities of these sensors, enabling accurate, real-time analysis and decision-making. This study presents an intelligent system that detects smartphone movements using DL techniques, namely convolutional neural networks (CNNs) and stacked autoencoders (SAEs). The dataset comprises six smartphone movements, with 921 samples split into 695 for training and 226 for testing. Among the SAE layers, Auto-Encoder 1 and Auto-Encoder 2 achieved the best training performance. The SAE reached a classification accuracy (CA) of 0.996 and an AUC of 1.0, while the CNN performed comparably with a CA of 0.991 and an AUC of 0.998. These results show that both CNNs and SAEs can identify smartphone movements accurately, and the findings can inform the design of motion-aware smartphone applications. Future research could improve motion detection by integrating additional sensor data and more advanced architectures, such as recurrent neural networks (RNNs) or transformers, to further enhance the prediction of smartphone movements.
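As an illustrative sketch only (not the authors' implementation), the stacked-autoencoder approach summarized above can be prototyped in plain NumPy: each autoencoder layer learns a compressed representation of flattened sensor windows, and the stacked codes would then feed a classifier for the six movement classes. All shapes, layer sizes, and the synthetic data below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for flattened sensor windows (assumption):
# 64 windows, each a 30-dim vector (e.g., 10 timesteps x 3 accelerometer axes).
X = rng.normal(size=(64, 30))

def train_autoencoder(X, hidden, epochs=200, lr=0.01):
    """Train one tied-weight autoencoder layer with plain gradient descent."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden))
    b_enc = np.zeros(hidden)
    b_dec = np.zeros(d)
    for _ in range(epochs):
        H = np.tanh(X @ W + b_enc)      # encode
        X_hat = H @ W.T + b_dec         # decode (tied weights)
        err = X_hat - X                 # reconstruction error
        # Backprop through the tied-weight decoder and the tanh encoder.
        dH = err @ W * (1 - H**2)
        gW = X.T @ dH + err.T @ H       # gradient from both uses of W
        W -= lr * gW / n
        b_enc -= lr * dH.sum(axis=0) / n
        b_dec -= lr * err.sum(axis=0) / n
    return W, b_enc

# Stack two autoencoders: the second trains on the first's codes,
# mirroring the "Auto-Encoder 1" / "Auto-Encoder 2" layers in the abstract.
W1, b1 = train_autoencoder(X, hidden=16)
H1 = np.tanh(X @ W1 + b1)
W2, b2 = train_autoencoder(H1, hidden=8)
H2 = np.tanh(H1 @ W2 + b2)

print(H2.shape)  # (64, 8): compact codes a 6-class classifier would consume
```

In a real system the final codes would be passed to a softmax layer trained on labeled movement windows; the greedy layer-wise pretraining shown here is the standard SAE recipe, not a claim about the paper's exact training procedure.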