© 2021, TUBITAK. All rights reserved.

Since the early 2000s, deep learning techniques have become the most prominent actors in the field of artificial intelligence. Although these techniques are widely used in many different areas, their successful performance in healthcare attracts particular attention. However, because they are optimized with far more parameters than traditional machine learning techniques, their solution processes are complex and remain opaque to human perception. For this reason, alternative studies have been carried out to make such black-box intelligent systems built on deep learning techniques reliable and understandable in terms of their limitations and error-making tendencies. These developments led to the introduction of a subfield called explainable artificial intelligence, whose solutions make it possible to judge whether the answers offered by deep learning techniques are safe. In this study, a Convolutional Neural Network (CNN) model was used for brain tumor detection, and the safety level of that model was assessed through an explanatory module supported by Class Activation Mapping (CAM). On the target data set, the developed CNN-CAM system achieved an average accuracy of 96.53%, sensitivity of 96.10%, and specificity of 95.72%. In addition, feedback provided by the doctors regarding the CAM visuals and the overall system performance showed that the CNN-CAM based solution was received positively. These findings indicate that the CNN-CAM system is reliable and understandable in terms of tumor detection.
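As background for the CAM module mentioned above, the following is a minimal NumPy sketch of how a class activation map is typically computed (Zhou et al., 2016): a weighted sum of the last convolutional layer's feature maps, using the fully connected weights that follow global average pooling. The array shapes, function name, and toy inputs here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a CAM as the weighted sum of the final conv layer's
    feature maps, using the FC weights for the chosen class.

    feature_maps : (K, H, W) activations of the final conv layer
    fc_weights   : (num_classes, K) weights of the FC layer that
                   follows global average pooling
    class_idx    : index of the class whose heatmap is requested
    """
    # Contract the K feature maps against the class's K weights -> (H, W)
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam -= cam.min()          # shift to non-negative values
    if cam.max() > 0:
        cam /= cam.max()      # scale to [0, 1] for visualization
    return cam

# Toy example with random activations (binary task: tumor / no tumor)
rng = np.random.default_rng(0)
maps = rng.random((8, 7, 7))      # 8 feature maps of size 7x7
weights = rng.random((2, 8))      # one weight vector per class
heatmap = class_activation_map(maps, weights, class_idx=1)
print(heatmap.shape)  # (7, 7)
```

In practice the resulting low-resolution heatmap is upsampled to the input image size and overlaid on the MRI slice, which is what the doctors in the study would have inspected.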