Abstract:
Living in a smart city offers many advantages, such as improved waste and water management, access to quality healthcare facilities, effective and safe transportation systems, and personal security. A system that can explain its judgments or predictions is termed Explainable AI (XAI): the term covers methods, tools, and frameworks that describe a model, its expected impact, and any potential biases, and that help users understand and trust the outputs produced by machine learning algorithms. Smart city infrastructures, however, are vulnerable to a wide range of security threats, including information theft, eavesdropping attacks, denial of service, communication delays, data manipulation, cyber-attacks on IoT devices, interception of communications, jamming, sensor malfunction, insecure application programming interfaces (APIs), and remote exploitation. The proposed framework for Explainable Artificial Intelligence (XAI) in smart city applications detects such attacks with 99.9% accuracy using a logistic regression model. On the test set, the model achieved an accuracy, precision, recall, and F1 score of 0.999 (99.9%), predicting correctly in nearly all cases with almost no false positives, false negatives, or misclassifications.
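To make the evaluation pipeline concrete, the sketch below shows how a logistic regression attack detector and the four reported metrics (accuracy, precision, recall, F1) can be computed. It is illustrative only, not the authors' implementation: the synthetic two-feature "traffic" data, the gradient-descent trainer, and all function names are assumptions introduced here.

```python
# Illustrative sketch (not the paper's code): logistic regression trained by
# gradient descent on synthetic benign/attack traffic, then evaluated with
# accuracy, precision, recall, and F1 -- the metrics the abstract reports.
import math
import random

random.seed(0)

def make_data(n=400):
    # Hypothetical two-feature records: benign traffic clusters near (0, 0),
    # attack traffic clusters near (3, 3).
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        center = 3.0 if label else 0.0
        x = [random.gauss(center, 1.0), random.gauss(center, 1.0)]
        data.append((x, label))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=200):
    # Stochastic gradient descent on the log-loss.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def evaluate(data, w, b):
    # Confusion-matrix counts, then the four metrics from the abstract.
    tp = fp = tn = fn = 0
    for x, y in data:
        pred = 1 if sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5 else 0
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 0:
            tn += 1
        else:
            fn += 1
    acc = (tp + tn) / len(data)
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return acc, prec, rec, f1

data = make_data()
train_set, test_set = data[:300], data[300:]
w, b = train(train_set)
acc, prec, rec, f1 = evaluate(test_set, w, b)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```

On well-separated synthetic clusters like these, the metrics come out close to 1.0; real intrusion-detection data would require the feature engineering and validation described in the paper itself.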