Alzheimer’s disease (AD) is a neurodegenerative disorder that affects a large number of people across the globe. Although AD is one of the most common brain disorders, it is difficult to detect, and its detection requires a categorical representation of features to differentiate similar patterns. Research into complex problems such as AD detection frequently employs neural networks, and these approaches are often regarded as well understood, and even sufficient, by researchers and scientists who lack formal training in artificial intelligence. It is therefore imperative to identify a detection method that is fully automated and user-friendly for non-AI experts. Such a method should promptly find efficient values for a model’s design parameters, simplifying the neural network design process and thereby helping to democratize artificial intelligence. Furthermore, multi-modal medical image fusion yields richer modal features and a superior ability to represent information: a fused image integrates relevant and complementary information from multiple input images, facilitating more accurate diagnosis and better treatment. This study presents MultiAz-Net, a novel optimized ensemble-based deep neural network that incorporates heterogeneous information from PET and MRI images to diagnose Alzheimer’s disease. Based on features extracted from the fused data, we propose an automated procedure for predicting the onset of AD at an early stage. The proposed architecture involves three steps: image fusion, feature extraction, and classification. Additionally, the Multi-Objective Grasshopper Optimization Algorithm (MOGOA) is presented as the optimizer that searches for efficient values of the model’s design parameters.
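To make the three-step pipeline concrete, the following is a minimal sketch in PyTorch, assuming co-registered 2-D PET and MRI slices. All names here (FusionBlock, MultiAzNetSketch), the channel-concatenation fusion strategy, the layer sizes, and the three-class output are illustrative assumptions, not the paper’s actual implementation; the values flagged in the comments are the kind of design parameters MOGOA would be expected to tune.

```python
# Hypothetical sketch of the three-step pipeline described above:
# image fusion -> feature extraction -> classification.
import torch
import torch.nn as nn


class FusionBlock(nn.Module):
    """Fuses co-registered PET and MRI slices by channel concatenation
    (one simple fusion strategy; the paper's exact method may differ)."""

    def __init__(self, out_channels: int = 16):  # design parameter
        super().__init__()
        # 2 input channels: one PET slice + one MRI slice
        self.conv = nn.Conv2d(2, out_channels, kernel_size=3, padding=1)

    def forward(self, pet: torch.Tensor, mri: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([pet, mri], dim=1)  # (B, 2, H, W)
        return torch.relu(self.conv(fused))


class MultiAzNetSketch(nn.Module):
    """Hypothetical end-to-end model: fusion, features, classifier."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.fusion = FusionBlock(out_channels=16)
        self.features = nn.Sequential(  # feature-extraction step;
            # channel counts / kernel sizes are MOGOA-tunable parameters
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)  # classification step

    def forward(self, pet: torch.Tensor, mri: torch.Tensor) -> torch.Tensor:
        x = self.fusion(pet, mri)
        x = self.features(x).flatten(1)
        return self.classifier(x)


# Usage: one batch of four 1-channel 128x128 PET/MRI slice pairs.
model = MultiAzNetSketch(num_classes=3)
pet = torch.randn(4, 1, 128, 128)
mri = torch.randn(4, 1, 128, 128)
logits = model(pet, mri)
print(logits.shape)  # torch.Size([4, 3])
```

Channel concatenation is only one of several plausible fusion strategies (others include pixel-wise averaging or transform-domain fusion); in the proposed system, networks of this kind would form the ensemble whose design parameters MOGOA searches.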