Publications internationales

2024
Hussein Ahmed Ali, Walid Hariri, N Smaoui Zghal, D Ben Aissa. (2024), Comparing PCA-Based Machine Learning Algorithms for COVID-19 Classification Using Chest X-ray Images. Baghdad Science Journal : University of Baghdad, https://bsj.uobaghdad.edu.iq/index.php/BSJ/workflow/access/9422

Résumé: The COVID-19 pandemic, which appeared in December 2019, has significantly influenced people's lives worldwide. Because the virus is unpredictable, several technologies are being used to assist in identifying COVID-19 patients and reducing illness transmission. Chest X-ray (CXR) imaging is a typical and cost-effective technique for suspected COVID-19 cases. However, the use of technology can further enhance the diagnostic process. An extensive dataset of CXR images has been made available on the Kaggle website, categorized into five classes to address this issue. Dealing with such comprehensive image data requires preprocessing to ensure speed and accuracy. The image preprocessing steps include converting the images to grayscale, adjusting image intensity, resizing the images, and extracting significant features using Principal Component Analysis (PCA) techniques. After preprocessing the dataset, various prediction models and Machine Learning (ML) algorithms are utilized to classify CXR images and predict whether a person will likely be diagnosed with COVID-19. The models used include Decision Tree (DT), Random Forest (RF), Stochastic Gradient Descent (SGD), Logistic Regression (LR), Gaussian Naive Bayes (GNB), and K-nearest Neighbors (KNN). Among these models, the DT approach has shown the highest accuracy compared to the others, such as GNB, KNN, SGD, LR, and RF. Usually, DT also demonstrates the best-weighted average across all evaluation parameters, including precision, sensitivity, and F1-score. However, it is essential to highlight that selecting the best ML model depends on some criteria, including the dataset, its characteristics, and implementation specifics. Therefore, it is crucial to consider these factors when selecting an appropriate ML model for COVID-19 diagnosis using CXR image classification.
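
The pipeline described above (grayscale conversion, intensity adjustment, resizing, PCA, then a classical classifier) maps naturally onto scikit-learn. The sketch below is only illustrative: the image size, number of components, classifier settings, and the `image_paths`/`labels` placeholders are assumptions, not the settings or code used in the paper.

```python
# Illustrative PCA + classical-ML pipeline for CXR classification (not the paper's exact code).
import numpy as np
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.exposure import rescale_intensity
from skimage.transform import resize
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

def preprocess(path, size=(128, 128)):
    """Grayscale conversion, intensity adjustment, resizing, and flattening of one CXR image."""
    img = imread(path)
    if img.ndim == 3:
        img = rgb2gray(img)
    img = rescale_intensity(img, out_range=(0.0, 1.0))
    return resize(img, size).ravel()

# image_paths / labels are hypothetical placeholders for a local copy of the Kaggle CXR dataset.
X = np.array([preprocess(p) for p in image_paths])
y = np.array(labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# PCA keeps the leading components; the Decision Tree is one of the six compared classifiers.
model = make_pipeline(PCA(n_components=100), DecisionTreeClassifier(random_state=0))
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```

Any of the other compared classifiers (RF, SGD, LR, GNB, KNN) can be swapped into the same pipeline in place of the Decision Tree.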

2023
Hussein Ahmed Ali, Walid Hariri, N Smaoui Zghal, D Ben Aissa. (2023), Fast Hybrid Deep Neural Network for Diagnosis of COVID-19 using Chest X-Ray Images. International Journal of Advanced Computer Science and Applications : Science and Information Organization, https://thesai.org/Publications/ViewPaper?Volume=14&Issue=3&Code=IJACSA&SerialNo=64

Résumé: In the last three years, the coronavirus (COVID-19) pandemic put healthcare systems worldwide under tremendous pressure. Imaging techniques, such as Chest X-Ray (CXR) images, play an essential role in diagnosing many diseases (for example, COVID-19). Recently, intelligent systems (Machine Learning (ML) and Deep Learning (DL)) have been widely utilized to identify COVID-19 from other upper respiratory diseases (such as viral pneumonia and lung opacity). Nevertheless, identifying COVID-19 from CXR images is challenging due to similar symptoms. To improve the diagnosis of COVID-19 using CXR images, this article proposes a new deep neural network model called Fast Hybrid Deep Neural Network (FHDNN). FHDNN consists of various convolutional layers and various dense layers. In the beginning, we preprocessed the dataset, extracted the best features, and expanded it. Then, we converted it from two dimensions to one dimension to reduce training time and hardware requirements. The experimental results demonstrate that preprocessing and feature expansion before applying FHDNN lead to better detection accuracy and faster execution. Furthermore, FHDNN outperformed its counterparts by achieving an accuracy of 99.9%, a recall of 99.9%, an F1-score of 99.9%, and a precision of 99.9% for the detection and classification of COVID-19. Accordingly, FHDNN is more reliable and can be considered a robust and faster model for COVID-19 detection.
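
The abstract does not spell out FHDNN's exact layer counts or sizes, so the PyTorch sketch below only illustrates the general shape it describes: 1D convolutional layers applied to images flattened to one dimension, followed by dense layers. All widths and depths here are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class FHDNNLike(nn.Module):
    """Sketch: a few 1D convolutional layers followed by dense layers,
    applied to CXR images flattened to one dimension (as the abstract describes)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.dense = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_classes),
        )

    def forward(self, x):          # x: (batch, 1, length)
        return self.dense(self.conv(x))

model = FHDNNLike()
dummy = torch.randn(8, 1, 1024)    # 8 flattened, feature-expanded images of length 1024
print(model(dummy).shape)          # torch.Size([8, 4])
```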

Imed-Eddine Haouli, Walid Hariri, Hassina Seridi-Bouchelaghem. (2023), COVID-Attention: Efficient COVID19 Detection using Pre-trained Deep Models based on Vision Transformers and X-ray Images. International Journal of Artificial Intelligence Tools : World Scientific, https://www.worldscientific.com/doi/10.1142/S021821302350046X
Thair A Kadhim, Walid Hariri, N Smaoui Zghal, D Ben Aissa. (2023), A face recognition application for Alzheimer’s patients using ESP32-CAM and Raspberry Pi. Journal of Real-Time Image Processing : Springer Verlag, https://link.springer.com/article/10.1007/s11554-023-01357-w

Résumé: This paper proposes a real-time face recognition application to aid people living with Alzheimer’s in identifying the people around them. This is achieved by developing a portable system consisting of glasses with an ESP32-CAM and a single-board microcomputer (the Raspberry Pi). The proposed system operates automatically and does not require physical interaction with the user. It utilizes wireless technologies to capture real-time video frames of human faces and transmit them (via Wi-Fi) to the Raspberry Pi, which detects and recognizes the captured human face and sends voice-activated feedback to the user’s ears over Bluetooth to pronounce their name. Several incompatibility challenges are encountered and appropriately handled during the system’s development, integration, and testing processes. A fully functional prototype is developed and tested successfully. When compared to the state-of-the-art, the obtained results have demonstrated superior performance in terms of a training accuracy of 99.46% and a face recognition accuracy of 99.48%. The entire processing time from capturing the human face to generating the voice message is found to be about one second (730 ms on a laptop and 1109 ms on a Raspberry Pi). The developed technology is anticipated to improve the patient’s quality of life and reduce their dependence on others.
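
A minimal sketch of the capture-recognize-speak loop described above, assuming an MJPEG stream from the ESP32-CAM and using the widely available face_recognition and pyttsx3 packages. The stream URL, the known-face files, and the choice of libraries are illustrative assumptions, not the authors' implementation.

```python
# Capture -> recognize -> speak loop (illustrative sketch, not the paper's code).
import cv2
import face_recognition
import pyttsx3

STREAM_URL = "http://192.168.1.50:81/stream"   # placeholder ESP32-CAM MJPEG endpoint

# Pre-computed encodings of the people the patient should recognize (hypothetical reference photos).
known_names = ["Alice", "Bob"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f"{n}.jpg"))[0]
    for n in known_names
]

tts = pyttsx3.init()               # on the real device, audio goes out to the earpiece over Bluetooth
cap = cv2.VideoCapture(STREAM_URL)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for encoding in face_recognition.face_encodings(rgb):
        matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.5)
        if any(matches):
            name = known_names[matches.index(True)]
            tts.say(f"This is {name}")
            tts.runAndWait()
```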

2022
Walid Hariri. (2022), Efficient Masked Face Recognition Method during the COVID-19 Pandemic. Signal, Image and Video Processing : Springer London, https://link.springer.com/article/10.1007/s11760-021-02050-w

Résumé: The coronavirus disease (COVID-19) is an unparalleled crisis leading to a huge number of casualties and security problems. In order to reduce the spread of coronavirus, people often wear masks to protect themselves. This makes face recognition a very difficult task since certain parts of the face are hidden. A primary focus of researchers during the ongoing coronavirus pandemic is to propose rapid and efficient solutions to handle this problem. In this paper, we propose a reliable method based on occlusion removal and deep learning-based features in order to address the problem of masked face recognition. The first step is to remove the masked face region. Next, we apply three pre-trained deep Convolutional Neural Networks (CNN), namely VGG-16, AlexNet, and ResNet-50, and use them to extract deep features from the obtained regions (mostly the eyes and forehead regions). The Bag-of-Features paradigm is then applied to the feature maps of the last convolutional layer in order to quantize them and obtain a more compact representation than the fully connected layer of a classical CNN. Finally, a Multilayer Perceptron (MLP) is applied for the classification process. Experimental results on the Real-World-Masked-Face-Dataset show high recognition performance compared to other state-of-the-art methods.
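
The sketch below illustrates the deep-feature extraction and Bag-of-Features quantization stages, assuming a torchvision ResNet-50 backbone (one of the three CNNs cited), a KMeans visual codebook, and a scikit-learn MLP. The occlusion-removal step is not reproduced, and `upper_face_paths`/`train_labels` are hypothetical placeholders for the cropped eye-and-forehead regions and their identity labels.

```python
# Deep features -> Bag-of-Features histograms -> MLP (illustrative sketch, not the exact paper code).
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

# ResNet-50 truncated after its last convolutional block.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone = torch.nn.Sequential(*list(resnet.children())[:-2]).eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])

def local_descriptors(path):
    """Return the 7x7 grid of 2048-d activations from the last conv layer as 49 local descriptors."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fmap = backbone(x)                                    # (1, 2048, 7, 7)
    return fmap.squeeze(0).reshape(2048, -1).T.numpy()        # (49, 2048)

def bof_histogram(desc, codebook):
    """Quantize local descriptors against the visual codebook (hard assignment)."""
    words = codebook.predict(desc)
    return np.bincount(words, minlength=codebook.n_clusters) / len(words)

# upper_face_paths / train_labels are hypothetical: cropped eye-and-forehead regions after mask removal.
all_desc = np.vstack([local_descriptors(p) for p in upper_face_paths])
codebook = KMeans(n_clusters=64, random_state=0).fit(all_desc)
X = np.array([bof_histogram(local_descriptors(p), codebook) for p in upper_face_paths])
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500).fit(X, train_labels)
```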

2021
Walid Hariri, Nadir Farah. (2021), Recognition of 3D Emotional Facial expression based on Handcrafted and Deep Feature Combination. Pattern Recognition Letters : Elsevier, https://www.sciencedirect.com/science/article/abs/pii/S0167865521001744

Résumé: Facial emotion recognition (FER) methods have been proposed mainly using 2D images. These methods suffer from many problems caused by the difficult conditions of unconstrained environments such as lighting conditions and view variations. In this paper, we aim to recognize emotional facial expressions independently of identity using 3D data and 2D depth images. Since 3D FER is a very fine-grained recognition task, mapping the 3D images into 2D depth images may lose some geometric characteristics of the expressive face and degrade FER performance. Convolutional Neural Networks (CNN), however, have been successfully applied to 2D depth images and have improved on handcrafted-based methods in computer vision and pattern recognition applications. For this reason, we combine two types of features in this paper, handcrafted and deep learning features, and prove their complementarity for 3D FER. Conveniently, covariance descriptors have proven very effective at combining features of different types into a compact representation. Therefore, we propose to use the covariance matrices of the features (handcrafted and deep ones) instead of the features independently. Since covariance matrices lie on the manifold formed by SPD (Symmetric Positive Definite) matrices, we mainly focus on the generalization of the RBF kernel to this manifold for 3D FER using supervised SVM classification. The performance achieved by the proposed method on the Bosphorus and BU-3DFE datasets outperforms similar state-of-the-art methods.
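
The sketch below illustrates the two ingredients the abstract combines: a covariance descriptor computed over per-point (handcrafted + deep) features, and an RBF-style kernel between SPD matrices fed to a precomputed-kernel SVM. The log-Euclidean form used here is one common way to extend the RBF kernel to the SPD manifold and is an assumption; the paper's exact kernel and features, and the `train_feats`/`test_feats`/`train_labels` placeholders, are not reproduced.

```python
# Covariance descriptors + log-Euclidean RBF kernel + SVM (illustrative sketch).
import numpy as np
from sklearn.svm import SVC

def covariance_descriptor(features):
    """features: (n_points, d) array of handcrafted + deep features sampled on one face."""
    c = np.cov(features, rowvar=False)
    return c + 1e-6 * np.eye(c.shape[0])            # regularize so the matrix is strictly SPD

def spd_log(c):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T

def log_euclidean_rbf(covs_a, covs_b, gamma=0.1):
    """K[i, j] = exp(-gamma * ||log(A_i) - log(B_j)||_F^2)."""
    logs_a = [spd_log(c) for c in covs_a]
    logs_b = [spd_log(c) for c in covs_b]
    K = np.zeros((len(logs_a), len(logs_b)))
    for i, la in enumerate(logs_a):
        for j, lb in enumerate(logs_b):
            K[i, j] = np.exp(-gamma * np.linalg.norm(la - lb, "fro") ** 2)
    return K

# train_feats / test_feats / train_labels are hypothetical: one (n_points, d) feature array per face.
train_covs = [covariance_descriptor(f) for f in train_feats]
test_covs = [covariance_descriptor(f) for f in test_feats]
svm = SVC(kernel="precomputed").fit(log_euclidean_rbf(train_covs, train_covs), train_labels)
pred = svm.predict(log_euclidean_rbf(test_covs, train_covs))
```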

Walid Hariri, Ali Narin. (2021), Deep Neural Networks for COVID-19 Detection and Diagnosis using Images and Acoustic-based Techniques: A Recent Review. Soft Computing : Springer Verlag, https://link.springer.com/article/10.1007/s00500-021-06137-x

Résumé: The new coronavirus disease (COVID-19) has been declared a pandemic since March 2020 by the World Health Organization. It consists of an emerging viral infection with respiratory tropism that could develop atypical pneumonia. Experts emphasize the importance of early detection of those who have the COVID-19 virus. In this way, patients will be isolated from other people and the spread of the virus can be prevented. For this reason, it has become an area of interest to develop early diagnosis and detection methods to ensure a rapid treatment process and prevent the virus from spreading. Since the standard testing system is time-consuming and not available for everyone, alternative early screening techniques have become an urgent need. In this study, the approaches used in the detection of COVID-19 based on deep learning (DL) algorithms, which have been popular in recent years, have been comprehensively discussed. The advantages and disadvantages of different approaches used in literature are examined in detail. We further present the databases and major future challenges of DL-based COVID-19 detection. The computed tomography of the chest and X-ray images gives a rich representation of the patient’s lung that is less time-consuming and allows an efficient viral pneumonia detection using the DL algorithms. The first step is the preprocessing of these images to remove noise. Next, deep features are extracted using multiple types of deep models (pretrained models, generative models, generic neural networks, etc.). Finally, the classification is performed using the obtained features to decide whether the patient is infected by coronavirus or it is another lung disease. In this study, we also give a brief review of the latest applications of cough analysis to early screen the COVID-19 and human mobility estimation to limit its spread.

Walid Hariri et al. (2021), Deep and Statistical-Based Methods for Alzheimer's Disease Detection: A Survey. Journal of Computing Science and Engineering (JCSE) : Korean Institute of Information Scientists and Engineers, http://jcse.kiise.org/PublishedPaper/topic_abstract.asp?idx=396

Résumé: Detection of Alzheimer's disease (AD) is one of the most potent and daunting activities in the processing of medical imagery. The survey of recent AD detection techniques in the last 10 years is described in this paper. The AD detection process involves various steps, namely preprocessing, feature extraction, feature selection, dimensionality reduction, segmentation and classification. In this study, we reviewed the latest findings and possible patterns as well as their main contributions. Different types of AD detection techniques are also discussed. Based on the applied algorithms and methods, and the evaluated databases (e.g., ADNI and OASIS), the performances of the most relevant AD detection techniques are compared and discussed.

2017
Walid Hariri, Hedi Tabia, Nadir Farah, Abdallah Benouareth, David Declercq. (2017), 3D Facial Expression Recognition Using Kernel Methods on Riemannian Manifold. Engineering Applications of Artificial Intelligence Journal : Elsevier, https://www.sciencedirect.com/science/article/abs/pii/S0952197617301033

Résumé: Automatic human Facial Expressions Recognition (FER) is becoming of increased interest. FER finds its applications in many emerging areas such as affective computing and intelligent human computer interaction. Most of the existing work on FER has been done using 2D data which suffers from inherent problems of illumination changes and pose variations. With the development of 3D image capturing technologies, the acquisition of 3D data is becoming a more feasible task. The 3D data brings a more effective solution in addressing the issues raised by its 2D counterpart. State-of-the-art 3D FER methods are often based on a single descriptor which may fail to handle the large inter-class and intra-class variability of the human facial expressions. In this work, we explore, for the first time, the usage of covariance matrices of descriptors, instead of the descriptors themselves, in 3D FER. Since covariance matrices are elements of the non-linear manifold of Symmetric Positive Definite (SPD) matrices, we particularly look at the application of manifold-based classification to the problem of 3D FER. We evaluate the performance of the proposed framework on the BU-3DFE and the Bosphorus datasets, and demonstrate its superiority compared to the state-of-the-art methods.

2016
Walid Hariri, Hedi Tabia, Nadir Farah, Abdallah Benouareth, David Declercq. (2016), 3D Face Recognition Using Covariance Based Descriptors. Pattern Recognition Letters Journal : Elsevier, https://www.sciencedirect.com/science/article/abs/pii/S0167865516300320

Résumé: In this paper, we propose a new 3D face recognition method based on covariance descriptors. Unlike feature-based vectors, covariance-based descriptors enable the fusion and the encoding of different types of features and modalities into a compact representation. The covariance descriptors are symmetric positive definite matrices which can be viewed as an inner product on the tangent space of (Symd+) the manifold of Symmetric Positive Definite (SPD) matrices. In this article, we study geodesic distances on the manifold and use them as metrics for 3D face matching and recognition. We evaluate the performance of the proposed method on the FRGCv2 and the GAVAB databases and demonstrate its superiority compared to other state of the art methods.
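
For concreteness, the snippet below computes one standard geodesic distance on the SPD manifold, the affine-invariant metric d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, via generalized eigenvalues. It illustrates the kind of metric the abstract refers to, without claiming it is the only distance studied in the paper.

```python
# Affine-invariant geodesic distance between SPD covariance descriptors (illustrative).
import numpy as np
from scipy.linalg import eigh

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B."""
    # The generalized eigenvalues of (B, A) are the eigenvalues of A^{-1/2} B A^{-1/2}.
    w = eigh(B, A, eigvals_only=True)
    return np.sqrt(np.sum(np.log(w) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 6))
y = rng.standard_normal((50, 6))
A = np.cov(x, rowvar=False) + 1e-6 * np.eye(6)
B = np.cov(y, rowvar=False) + 1e-6 * np.eye(6)
print(airm_distance(A, B))        # 0.0 when A == B, grows as the descriptors diverge
```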

Livres

2018
Walid Hariri. (2018), Contribution à la reconnaissance/authentification de visages 2D/3D. Thèse de Doctorat : Cergy Pontoise University, https://www.theses.fr/2017CERG0905

Résumé: 3D face analysis, including 3D face recognition and 3D facial expression recognition, has become an active research area in recent years. Several methods have been developed using 2D images to address these problems. However, these methods suffer from a number of limitations related to face orientation, illumination, facial expression, and occlusions. Recently, the development of 3D acquisition sensors has made 3D data increasingly available. These 3D data are relatively invariant to illumination and pose, but they remain sensitive to expression variation. The main objective of this thesis is to propose new techniques for 3D face recognition/verification and 3D facial expression recognition. First, a face recognition method using covariance matrices as descriptors of facial regions is proposed. Our method comprises the following steps: faces are preprocessed and aligned, then a uniform sampling is applied on the facial surface to localize a set of feature points. Around each point, we extract a covariance matrix as a descriptor of the face region. Two matching strategies are proposed, and different distances (geodesic and non-geodesic) are applied to compare faces. The proposed method is evaluated on three face databases: GAVAB, FRGCv2, and BU-3DFE. The second part of this thesis addresses 3D facial expression recognition. To this end, we propose to use covariance matrices with kernel methods. In this contribution, we apply the Gaussian kernel to map covariance matrices into a Hilbert space, which makes it possible to use algorithms already implemented for Euclidean spaces (e.g., SVM) in this non-linear space. Experiments are then carried out on two 3D facial expression databases (BU-3DFE and Bosphorus) to recognize the six prototypical facial expressions.

2017
Walid Hariri. (2017), Contribution to 2D/3D face recognition/authentification. PhD dissertation : tel.archives-ouvertes, https://tel.archives-ouvertes.fr/tel-01784155/document

Résumé: 3D face analysis, including 3D face recognition and 3D facial expression recognition, has become a very active area of research in recent years. Various methods using 2D image analysis have been presented to tackle these problems. 2D image-based methods are inherently limited by variability in imaging factors such as illumination and pose. The recent development of 3D acquisition sensors has made 3D data more and more available. Such data is relatively invariant to illumination and pose, but it is still sensitive to expression variation. The principal objective of this thesis is to propose efficient methods for 3D face recognition/verification and 3D facial expression recognition. First, a new covariance-based method for 3D face recognition is presented. Our method includes the following steps: first, the 3D facial surface is preprocessed and aligned. A uniform sampling is then applied on the face surface to localize a set of feature points; around each point, we extract a covariance matrix as a local region descriptor. Two matching strategies are then proposed, and various distances (geodesic and non-geodesic) are applied to compare faces. The proposed method is assessed on three datasets: GAVAB, FRGCv2, and BU-3DFE. In the second part of this thesis, we present an efficient approach for 3D facial expression recognition using kernel methods with covariance matrices. In this contribution, we propose to use a Gaussian kernel which maps covariance matrices into a high-dimensional Hilbert space. This makes it possible to use conventional algorithms developed for Euclidean-valued data, such as SVM, on such non-linear-valued data. The proposed method has been assessed on two well-known datasets, BU-3DFE and Bosphorus, to recognize the six prototypical expressions.

Communications internationales

2024
Walid Hariri, F. D. Baraka, A. Boulemden. (2024), Predicting Energy Consumption in Smart Buildings Using Machine Learning and Environmental Variables. 2024 International Conference of the African Federation of Operational Research Societies (AFROS) : IEEE, https://ieeexplore.ieee.org/abstract/document/11037093

Résumé: Over the past several decades, there has been a significant increase in energy consumption worldwide, with a substantial portion of this usage occurring in residential buildings. As a result, developing reliable tools for analyzing and forecasting energy consumption has become essential in the global effort to enhance sustainability. Machine learning (ML) has demonstrated exceptional performance in predictive tasks related to energy usage. In this paper, we evaluate and compare five ML and deep learning models to predict energy consumption in smart buildings, utilizing the publicly available KAG energy dataset. Our experimental results indicate that Long Short-Term Memory (LSTM) consistently outperforms Gradient Boosting (GBoost), Extreme Gradient Boosting (XGBoost), Multi-layer Perceptron (MLP), and Extra Trees (ExtraTr) models, offering superior predictive accuracy and robustness.
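
A minimal PyTorch sketch of an LSTM regressor over sliding windows of consumption and environmental features, in the spirit of the comparison above. The window length, layer sizes, training details, and the synthetic stand-in data are assumptions; the paper uses the KAG energy dataset.

```python
# LSTM regressor over sliding windows (illustrative sketch, not the paper's configuration).
import numpy as np
import torch
import torch.nn as nn

def make_windows(series, window=24):
    """Turn a (time, features) array into (window, features) inputs and next-step consumption targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:, 0]                        # assume column 0 is energy consumption
    return torch.tensor(X, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)

class LSTMRegressor(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)   # predict from the last time step

# Synthetic stand-in data: in practice this would be the KAG energy dataset
# (consumption plus environmental variables such as temperature and humidity).
data = np.random.rand(1000, 5).astype(np.float32)
X, y = make_windows(data)
model, loss_fn = LSTMRegressor(n_features=5), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                # a few epochs just to show the loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(float(loss))
```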

Walid Hariri, M. A. Boumezaid. (2024), Automatic Weather Detection Using Vision Transformers and Adaptive Transfer Learning. 2024 International Conference of the African Federation of Operational Research Societies (AFROS) : IEEE, https://ieeexplore.ieee.org/abstract/document/11036919

Résumé: The ability to recognize weather patterns significantly impacts various aspects of daily life, such as weather forecasting, transportation, agriculture, and forest management. Machine learning techniques, particularly Convolutional Neural Networks (CNNs), are expected to provide more accurate weather pattern analysis than traditional radar systems. However, CNNs often struggle to capture multi-level dependencies in input images, and increasing the convolution filter size to address this issue also raises network complexity. Recently, Transformers, initially successful in Natural Language Processing (NLP), have been applied to computer vision tasks, with Vision Transformers (ViT) outperforming CNNs in many scenarios. This paper proposes a deep learning-based method for accurate detection and categorization of weather conditions into multiple classes using a transfer learning technique. Recognizing the challenge of determining the optimal number of trainable layers within a CNN and blocks within a ViT, we introduce an Adaptive Layer Freezing (ALF) technique to dynamically adjust trainable layers for optimized performance during the transfer learning process, referred to as Adaptive Transfer Learning. We assessed a new architecture using ViT-L32 and compared it to two CNN models: EfficientNetB0 and MobileNetV2. The models' performances were evaluated on two weather imaging databases, with ViT-L32 achieving the highest classification accuracy in both binary and multi-class classifications with 98.28 % and 96.45 % respectively.
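
The sketch below shows only the mechanics of block-wise freezing for ViT transfer learning, using torchvision's ViT-L/32. The rule that adaptively decides how many blocks to train (the paper's Adaptive Layer Freezing) is not reproduced; the number of trainable blocks chosen here is an arbitrary assumption.

```python
# Block-wise freezing for ViT transfer learning (mechanics only; the ALF criterion is not shown).
import torch
import torch.nn as nn
from torchvision.models import vit_l_32, ViT_L_32_Weights

def build_vit(n_classes, n_trainable_blocks):
    model = vit_l_32(weights=ViT_L_32_Weights.IMAGENET1K_V1)
    model.heads = nn.Linear(model.hidden_dim, n_classes)        # new classification head

    for p in model.parameters():
        p.requires_grad = False                                  # freeze everything first
    blocks = list(model.encoder.layers)
    for block in blocks[len(blocks) - n_trainable_blocks:]:      # unfreeze the last k encoder blocks
        for p in block.parameters():
            p.requires_grad = True
    for p in model.heads.parameters():
        p.requires_grad = True
    return model

# The paper's Adaptive Layer Freezing adjusts the number of trainable blocks during transfer learning;
# here we simply build one candidate configuration and report its trainable parameter count.
model = build_vit(n_classes=5, n_trainable_blocks=4)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```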

2023
Walid Hariri et al. (2023), Advanced Deep Transfer Learning Using Ensemble Models for COVID-19 Detection from X-ray Images. Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Lisbon, Portugal : VISIGRAPP, VISAPP, ISBN 978-989-758-634-7, ISSN 2184-4321, pages 355-362, https://visapp.scitevents.org/Home.aspx
Imed-Eddine Haouli, Walid Hariri, Hassina Seridi-Bouchelaghem. (2023), Glaucoma Detection Using Optimal Batch Size for Transfer Learning and Ensemble Model Techniques. 12th International Conference on Information Systems and Advanced Technologies “ICISAT 2022” Intelligent Information, Data Science and Decision Support System : Springer, https://link.springer.com/chapter/10.1007/978-3-031-25344-7_19

Résumé: Glaucoma is a chronic disease resulting in vision loss, characterized by gradual damage to the optic nerve. To prevent vision loss, early detection is the key solution for limiting this disease. Deep learning algorithms, especially convolutional neural networks (CNNs), have recently demonstrated high robustness in medical image classification tasks. Nevertheless, to achieve this performance, CNNs require their parameters to be set before the training phase. In this paper, we investigate the impact of the batch size on five fine-tuned pre-trained models for glaucoma detection using fundus images. Our proposal consists of finding the optimal batch size for each model, referred to as the OBS. Moreover, to further enhance the performance, we have combined the models using the majority voting method, taking into account the OBS of each one. The results on five challenging datasets show that the ensemble model technique improves the performance of the single models and outperforms similar state-of-the-art methods.
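
A skeleton of the two steps the abstract describes: searching an Optimal Batch Size (OBS) per model, then majority voting over the fine-tuned models. The `train_and_evaluate` and `predict` functions are random stubs standing in for the actual fine-tuning and inference code, and the model list and batch-size grid are illustrative assumptions.

```python
# OBS search per model, then majority voting across models (illustrative skeleton).
import numpy as np

rng = np.random.default_rng(0)

def train_and_evaluate(model_name, batch_size):
    """Stand-in for fine-tuning `model_name` with `batch_size` and returning validation accuracy."""
    return rng.uniform(0.85, 0.95)

def predict(model_name, batch_size):
    """Stand-in for the fine-tuned model's class predictions on the test fundus images."""
    return rng.integers(0, 2, size=100)

candidate_batch_sizes = [8, 16, 32, 64]
model_names = ["vgg16", "resnet50", "inceptionv3", "densenet121", "mobilenetv2"]  # illustrative list

obs = {}
for name in model_names:
    scores = {bs: train_and_evaluate(name, bs) for bs in candidate_batch_sizes}
    obs[name] = max(scores, key=scores.get)          # batch size with the best validation accuracy

# Majority vote over per-model class predictions, each model trained with its own OBS.
all_preds = np.stack([predict(name, obs[name]) for name in model_names])   # (n_models, n_images)
ensemble = np.apply_along_axis(lambda votes: np.bincount(votes).argmax(), 0, all_preds)
```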

Walid Hariri et al. (2023), A Comprehensive Review of Sound-Based Modalities for Automatic COVID-19 Detection using Deep Learning-Based Techniques. Information Technologies & Smart Industrial Systems (ITSIS) : IEEE, https://ieeexplore.ieee.org/abstract/document/10118380/

Résumé: The World Health Organization has labeled the novel coronavirus illness (COVID-19) a pandemic since March 2020. It is a new viral infection with a respiratory tropism that could lead to atypical pneumonia. Thus, according to experts, early detection of people infected by the COVID-19 virus is highly needed. In this manner, patients will be segregated from other individuals, and the infection will not spread. As a result, developing early detection and diagnosis procedures to enable a speedy treatment process and stop the transmission of the virus has become a focus of research. Alternative early-screening approaches have become necessary due to the time-consuming nature of the current testing methodology, such as the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. The methods for detecting COVID-19 using deep learning (DL) algorithms with the sound modality, which have become an active research area in recent years, are thoroughly reviewed in this work. Although the majority of the newly proposed methods are based on medical images (i.e., X-ray and CT scans), we show in this comprehensive survey that the sound modality can be a good alternative, providing a faster and easier way to create a database with high performance. We also present the most popular sound databases proposed for COVID-19 detection.

Walid Hariri et al. (2023), A Comprehensive Survey on Glaucoma Disease Detection using Deep Learning-based Techniques. International Conference on Technological Advances in Electrical Engineering ICTAEE’23, http://oldftech.univ-skikda.dz/ictaee2023/

Résumé: Glaucoma is one of the more challenging diseases because of its asymptomatic nature. This neuropathic disease is the main cause of blindness, and it manifests as a cup enlargement. The detection of glaucoma is generally performed by experts using manual methods, such as analyzing images of defective eyes. Although these methods can detect the disease and prevent it from progressing, they are expensive, time-consuming, and less accurate. Recently, researchers have developed new computer-aided diagnosis (CAD) systems to assist experts in detecting glaucoma. CAD systems are more cost-effective and accurate than traditional techniques, as they can quickly analyze large data sets of eye images and accurately detect abnormalities such as glaucoma. Deep learning (DL) has therefore become a very good alternative to manual methods due to its high performance in detecting and diagnosing diseases from different image modalities. In this survey, we give a recent overview of the best methods proposed in the literature based on DL, especially Convolutional Neural Networks (CNN). We present a taxonomy that divides the existing methods into two main groups, classification and segmentation, and nine subgroups according to the model architecture, data processing, region of interest, and the diagnosis decision strategy. Finally, we discuss the challenges, evaluate the performance of existing methods, and provide new perspectives for the development of glaucoma detection.

Walid Hariri et al. (2023), Exploring Vision Transformers for Automated Glaucoma Disease Diagnosis in Fundus Images. International Conference on Decision Aid Sciences and Applications (DASA) : IEEE, https://ieeexplore.ieee.org/abstract/document/10286714/

Résumé: Glaucoma, a challenging disease that can cause irreversible blindness if not detected early, has garnered significant interest in utilizing deep learning and medical imaging for automated diagnosis and categorization. This paper introduces a novel approach for automatic glaucoma detection by leveraging the advancements of Vision Transformers (ViT), which have shown excellent performance in computer vision tasks following their success in natural language processing. Although Convolutional neural networks (CNNs) are widely acknowledged as one of the best techniques for this task, they still face certain challenges. Therefore, extensive hyper-parameter optimizations are performed during the training phase to determine the optimal settings for each ViT model. Subsequently, a comparison is made between the results obtained from two top-performing CNN models. The study employs transfer learning on five retinal fundus image datasets. Experimental results conducted on the combined datasets demonstrate that the proposed ViT-based model is an effective tool for glaucoma detection, achieving an average accuracy of 92.67% using ViT-L32.

Walid Hariri et al. (2023), Alzheimer Disease Detection Using Advanced Transfer Learning Techniques on MRI Images. International Conference on Decision Aid Sciences and Applications (DASA) : IEEE, https://ieeexplore.ieee.org/abstract/document/10286802/

Résumé: Alzheimer's disease (AD) is a neurological disorder that has a profound impact on millions of individuals globally. It is imperative to identify and diagnose AD at an early stage with precision, as this plays a crucial role in timely intervention and effective treatment. In recent years, the utilization of magnetic resonance imaging (MRI) in conjunction with deep learning has demonstrated promising results for assisting in the detection of Alzheimer's disease (AD). This study introduces a methodology for detecting AD by employing MRI images and convolutional neural networks (CNNs). Five pre-trained CNN models, including MobileNetV2, ResNet50, EfficientNet-B0, InceptionV3, and DenseNet-201, are employed for the transfer learning (TL) task. A comparative study and detailed analysis of the performances achieved by each model are presented. The models are trained on a large dataset of MRI images, consisting of both healthy and AD-affected brains with hyper-parameters tuning. During training, the CNNs learn to automatically identify patterns and discriminative features that differentiate between normal and AD-affected brain regions. To evaluate the effectiveness of the proposed method, a comprehensive set of experiments is conducted using publicly available datasets. From the experimental study, it is observed that the TL approach using the five pretrained CNN models achieves good performances in the two class classification problem of Very Mild Demented and Non Demented individuals. The results of this paper contribute to the growing field of computer-aided diagnosis for AD and emphasize the effectiveness of deep learning techniques, particularly CNNs, in the analysis of medical imaging data.
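
A sketch of the transfer-learning setup for one of the five backbones (EfficientNet-B0 via torchvision). The frozen-feature strategy, input normalization, and hyper-parameters shown are illustrative assumptions rather than the exact published configuration.

```python
# Transfer learning for AD vs. non-demented MRI slices (illustrative sketch).
import torch
import torch.nn as nn
from torchvision import models, transforms

def build_ad_classifier(n_classes=2, freeze_backbone=True):
    net = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in net.features.parameters():
            p.requires_grad = False                        # keep the ImageNet features fixed
    in_features = net.classifier[1].in_features            # replace the original ImageNet head
    net.classifier[1] = nn.Linear(in_features, n_classes)
    return net

# MRI slices are resized and normalized with the ImageNet statistics the backbone expects.
mri_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = build_ad_classifier()                               # e.g., Non Demented vs Very Mild Demented
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```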

Walid Hariri et al. (2023), A review study of ChatGPT applications in education. International Conference on Innovations in Intelligent Systems and Applications (INISTA) : IEEE, https://ieeexplore.ieee.org/abstract/document/10310439

Résumé: Recent advances in large-scale language models have pushed the boundaries of natural language processing and set new performance standards. It is remarkable how convincingly artificial intelligence can imitate human behavior and writing style. Deep learning and natural language processing (NLP) have recently driven the development of large language models. One of these models is ChatGPT (Chat Generative Pre-trained Transformer), created by OpenAI in 2022 for chats with open-ended questions. Numerous fields, such as education, have effectively used ChatGPT in several applications to create exam questions and answers, customise learning experiences, and facilitate online dialogues, among other things. It is an effective tool for natural language processing due to its adaptability and precision. This article gives an overview of ChatGPT in the educational space: we review its recent innovations and capabilities across many education tasks, as well as future challenges.

Thair Kadhim, N. Smaoui Zghal, Walid Hariri, D. Ben Aissa. (2023), A Review of Alzheimer’s Disease and Emerging Patient Support Systems. 20th International Multi-Conference on Systems, Signals & Devices (SSD) : IEEE, https://ieeexplore.ieee.org/abstract/document/10411152

Résumé: Alzheimer's disease (AD) is the main and most common cause of brain ill health and neurocognitive disorders, accounting for about 80% of dementia cases. This disease is considered one of the biggest challenges facing social organizations and medical care providers at present, whether within these organizations or at home. Among the most worrying problems facing the world is the significant rise in the prevalence of dementia, especially among those over the age of 65, with no cure for this disease. Consequently, healthcare costs have risen steadily, which burdens healthcare systems and puts pressure on families. Researchers have estimated that by 2050 there will be 152 million individuals worldwide living with dementia, and the cost of dementia care is expected to exceed $1 trillion. It is therefore important for these patients to keep doing what they can for themselves, such as going out unaccompanied and taking medicine on time, for as long as possible. The aim of this study is to help Alzheimer's disease patients slow cognitive degradation and to reduce the burden of care and psychological pressure on families and caregivers. The research reviews the existing studies on AD prevention, progression, planning, and relevant factors for risk reduction. Additionally, the study investigates assistance systems through computerized cognitive assessment, using technology tools and applications dealing with health information recording, drug monitoring, location, and behavior tracking.

Hussein Ahmed Ali, Walid Hariri, N. Smaoui Zghal, D. Ben Aissa. (2023), Machine Learning and Deep Learning in Chest X-Rays Images for COVID-19 Diagnosis: A Review. 20th International Multi-Conference on Systems, Signals & Devices (SSD) : IEEE, https://ieeexplore.ieee.org/abstract/document/10411209

Résumé: There is an ongoing worldwide spread of the coronavirus disease 2019 (COVID-19) pandemic. Medical imaging like Chest X-rays (CXR) is essential in the fight against COVID-19; Artificial Intelligence (AI) technologies help medical specialists and increase the efficiency of imaging tools. With various positive stories, Machine Learning (ML) and Deep Learning (DL) were utilized daily in many ways. Also, they were vital in dealing with the COVID-19 outbreak. Many scientific research institutions, companies, and states are fighting against the spread of COVID-19 through a quick diagnosis. The paper discusses the AI-based ML and DL approaches for diagnosing and treating COVID-19 by CXR. In addition, AI-based DL and ML methods, along with the provided tools, CXR images of datasets, and performance in tackling COVID-19, are summarized in this work. Dealing with data of the CXR images needs dataset preprocessing through choosing the optimal method for getting speed and best accuracy. This paper reviews different techniques for preprocessing the CXR images of the dataset before applying ML or DL models. The presented survey offers a thorough overview of current advanced ML and DL research methods and describes how ML and DL might enhance COVID-19 status diagnosis. Also, this study provides details on future directions and challenges, mainly focusing on integrating ML and DL using CXR. Finally, this survey offers a handy guide for the researchers to choose the best modality and the most convenient ML and DL algorithm to detect the COVID-19 virus automatically with high accuracy using CXR images.

Yunus Emre Erdoğan, Ali Narin, Walid Hariri. (2023), Comparison of Different Segmentations in Automated Detection of Hypertension Using Electrocardiography with Empirical Mode Decomposition. 7th International Conference on Engineering Technologies (ICENTE’23) : Selcuk University, Turkey, https://vb.vgtu.lt/object/elaba:187991060/187991060.pdf#page=136

Résumé: Hypertension (HPT) refers to a condition where the pressure exerted on the walls of the arteries by blood pumped from the heart reaches levels that can lead to various ailments. Annually, a significant number of lives are lost globally due to diseases linked to HPT. Therefore, the early and accurate diagnosis of HPT is of utmost importance. This study aimed to detect patients suffering from HPT automatically and with minimal error by utilizing electrocardiogram (ECG) signals. The research involved the collection of ECG signals from two distinct groups, consisting of ECG data of five thousand and ten thousand data points in length, respectively. The performance in HPT detection was evaluated using entropy measurements derived from the 5-layer Intrinsic Mode Function (IMF) signals obtained through the Empirical Mode Decomposition method. The resulting performances were compared based on the nine features extracted from each IMF. Using the 5-fold cross-validation technique, the highest accuracy rates achieved were 99.9991% and 99.9989% for ECG data of lengths five thousand and ten thousand, respectively, using decision tree algorithms. These remarkable performance results indicate the potential usefulness of this method in assisting medical professionals to identify individuals with HPT.
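
A sketch of the EMD-based pipeline: decompose each ECG segment into IMFs, compute an entropy feature per IMF, and run 5-fold cross-validation with a decision tree. The paper extracts nine features from each of the five IMFs; here a single Shannon entropy per IMF and synthetic ECG segments stand in as illustrative assumptions.

```python
# EMD -> per-IMF entropy features -> decision tree with 5-fold CV (illustrative sketch).
import numpy as np
from PyEMD import EMD                    # pip install EMD-signal
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def imf_entropies(signal, n_imfs=5):
    """Decompose the signal and return the Shannon entropy of each of the first n_imfs IMFs."""
    imfs = EMD().emd(signal)[:n_imfs]
    feats = []
    for imf in imfs:
        hist, _ = np.histogram(imf, bins=64, density=True)
        p = hist / hist.sum()
        feats.append(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    feats += [0.0] * (n_imfs - len(feats))           # pad if fewer IMFs are produced
    return feats

# Synthetic stand-ins for 5,000-sample ECG segments (label 1 = hypertensive, 0 = normotensive).
rng = np.random.default_rng(0)
segments = rng.standard_normal((40, 5000))
labels = rng.integers(0, 2, size=40)

X = np.array([imf_entropies(s) for s in segments])
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, labels, cv=5)
print(scores.mean())
```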

2022
Hussein Ahmed Ali, Walid Hariri, N Smaoui Zghal, D Ben Aissa. (2022), A Comparison of Machine Learning Methods for best Accuracy COVID-19 Diagnosis Using Chest X-Ray Images. IEEE 9th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT) : IEEE, https://ieeexplore.ieee.org/abstract/document/9875477

Résumé: The coronavirus (COVID-19), which emerged in December 2019, changed people's view of life in every country of the world, and the unpredictable chaos it caused requires a variety of technologies to aid in identifying COVID-19 patients and controlling the spread of the disease. For suspected COVID-19 cases, chest X-ray (CXR) imaging is a standard, low-cost examination, but technology is still needed to support a suitable diagnosis. In response to this issue, a large dataset of CXR images, divided into four classes, was obtained from the Kaggle website. Handling such a large volume of images requires preprocessing, choosing the optimal method to obtain both speed and the best accuracy: the images are converted to gray level, their intensity is adjusted, they are resized, and the best features are extracted before Machine Learning (ML) models are applied. Several prediction models and ML algorithms are then evaluated on the preprocessed dataset: Decision Tree (DT), Random Forest (RF), Stochastic Gradient Descent (SGD), Logistic Regression (LR), Gaussian Naive Bayes (GNB), and K-Nearest Neighbors (KNN) are used to quickly predict which patients would be diagnosed with COVID-19 from CXR image classification. KNN showed the best accuracy compared with the others (GNB, DT, SGD, LR, and RF), and also the best weighted average across all evaluation metrics, namely precision, sensitivity, and F1-score.

Thair A Kadhim, N Smaoui Zghal, Walid Hariri, D Ben Aissa. (2022), Face Recognition in Multiple Variations Using Deep Learning and Convolutional Neural Networks. IEEE 9th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT) : IEEE, https://ieeexplore.ieee.org/abstract/document/9875530

Résumé: Face Recognition (FR) has been widely used in the tracking and identification of individuals. However, because face images vary depending on expressions, ages, individual locations, and lighting conditions, the facial photographs of the same sample may appear to be distinct, making face recognition more difficult. Deep learning (DL) is now a suitable solution for face recognition and computer vision. In this study, features and traits were extracted from images of a large data set (called FERET) consisting of 14,126 images that were divided into 80% for training data and 20% for testing data using a Convolutional Neural Network (CNN). The CNN is first pre-trained using supplementary data for the purpose of obtaining updated weights, and then trained with the target dataset in order to uncover more hidden facial characteristics. Three different deep learning models are implemented: AlexNet, Resnet18, and DenseNet-161. The performance of these models is compared experimentally in terms of their classification accuracy. The obtained results showed that the DenseNet-161 has the highest accuracy of 98.6%, while the accuracies of the Resnet18 and AlexNet are 96.3% and 93.3%, respectively.

2021
Walid Hariri, Bourefis. (2021), Accurate Transfer Learning and Deep Ensemble Models for COVID-19 Detection using X-ray Images. 4ème Conférence Internationale sur la Vision Artificielle, Tizi-Ouzou, Novembre 2021 : IEEE, http://labs.ummto.dz/lvaas/CVA2021/programme.html
2020
Marwa Zaabi, Nadia Smaoui, Houda Derbel, Walid Hariri. (2020), Alzheimer’s disease detection using convolutional neural networks and transfer learning-based methods. 17th IEEE International Multi-Conference on Systems, Signals & Devices : IEEE, https://ieeexplore.ieee.org/document/9364155
Walid Hariri, Nadir Farah. (2020), Deep Feature Quantization for 3D Face Recognition. 6th International Conference of Computing for Engineering and Sciences : Scopus
Walid Hariri, Nadir Farah. (2020), Efficient Graph-based Kernel using Covariance Descriptors for 3D Facial Expression Classification. International conference on Intelligent Systems and Pattern recognition : ACM, https://doi.org/10.1145/3432867.3432890
2017
Walid Hariri, Hedi Tabia, Nadir Farah, David Declercq, Abdallah Benouareth. (2017), Geometrical and Visual Feature Quantization for 3D Face Recognition. International Conference on Computer Vision Theory and Applications : SCITEPRESS Digital Library, https://tel.archives-ouvertes.fr/ETIS-MIDI/hal-01742753

Résumé: In this paper, we present an efficient method for 3D face recognition based on vector quantization of both geometrical and visual properties of the face. The method starts by describing each 3D face using a set of orderless features, and then uses the Bag-of-Features paradigm to construct the face signature. We analyze the performance of three well-known classifiers: the Naïve Bayes, the Multilayer Perceptron, and the Random Forest. The results reported on the FRGCv2 dataset show the effectiveness of our approach and prove that the method is robust to facial expression.

2016
Walid Hariri, Hedi Tabia, Nadir Farah, Abdallah Benouareth, David Declercq. (2016), Hierarchical Covariance Description for 3D Face Matching and Recognition Under Expression Variation. International Conference on 3D Imaging (IC3D) : IEEE, https://ieeexplore.ieee.org/abstract/document/7823458

Résumé: In this paper, we propose a hierarchical covariance description for 3D face matching and recognition under expression variation. Unlike feature-based vectors, covariance-based descriptors enable the fusion and the encoding of different types of features and modalities into a compact representation. The efficiency of covariance descriptors however may depend on the size of its region of definition. On the one hand, co-varying features in a small region do not capture sufficient properties of the face. On the other hand, large regions only capture coarse features, which may not be sufficiently discriminative. In this paper, we propose to represent a 3D face using a set of feature points. Around each feature point, we consider three covariance description levels. In our experiments, we demonstrate the utility of this representation and present challenging results on different datasets including the BU-3DFE and the GAVAB datasets.

Walid Hariri. (2016), La reconnaissance de visages 3D en utilisant des descripteurs de covariance. GdR-ISIS, Journée de l'AS Visage, geste, action et comportement, 2016 : GdR-ISIS, http://www.gdr-isis.fr/index.php?page=reunion&idreunion=323

Résumé: In this work, we propose a 3D face recognition method based on covariance matrices. Unlike classical approaches, covariance descriptors make it possible to fuse several features and modalities into a single compact representation. Covariance matrices form a Riemannian manifold (Symd+). We therefore propose to exploit the geodesic distance defined on this manifold to quantify their similarity. To compare two faces, we compute the distance between their pairs of homologous regions. We evaluated our method on two benchmark face databases, FRGCv2 and GAVAB. The results obtained demonstrate the superiority of our method compared to several state-of-the-art methods.

Communications nationales

2023
Walid Hariri et al. (2023), A Comprehensive Survey on Media Security using Deep learning–based Techniques. ملتقى وطني حول ثنائية الانكشاف الإعلامي / الأمن الإعلامي ومتطلبات التأمين والحماية من مخاطر المعلوماتية : جامعة أم البواقي, http://www.univ-oeb.dz/fr/

Résumé: In recent years, media security and information risks in digital spaces have become a growing concern for individuals and organizations worldwide. Risks to media security have changed over time as cybercriminals have developed more effective attack strategies. Some of the most prevalent types of cyber dangers include malware, phishing schemes, ransomware attacks, and social engineering attacks. Cybercriminals are increasingly using artificial intelligence and machine learning algorithms, which makes it more difficult to identify and stop these attacks. To safeguard their digital assets, companies and individuals are investing more and more in cybersecurity solutions. A few examples of this are the use of firewalls, antivirus software, intrusion detection systems, and encryption technology. Using security frameworks like the National Institute of Standards and Technology Cybersecurity Framework (NIST CSF) and ISO/IEC 27001 can also help firms create a thorough security posture. However, there is no assurance that these measures will be effective, so it is important to use a combination of different techniques and approaches, including artificial intelligence, to reduce the risks and improve overall security. Artificial intelligence (AI) has emerged as a crucial tool for media security and for reducing information threats in digital spaces. Effective security measures are more important than ever as the world relies more and more on digital platforms. AI has emerged as a powerful technology for addressing this challenge by providing advanced threat detection and prevention capabilities, automating content filtering and moderation, improving user behavior analysis, enabling identity verification, and facilitating anomaly detection. Deep learning (DL) is considered one of the leading approaches for detecting malicious content in images and videos, including spam, phishing, and fake news, as confirmed by the many studies in the literature that adopt a DL strategy. In this paper, efforts have been made to provide a recent overview of the existing methods that use DL approaches to enhance media security in digital spaces.

2021
Walid Hariri, M Zaabi. (2021), Deep Residual Feature Quantization for 3D Face Recognition. 1st National Conference on Applied Computing and Smart Technologies, https://acst.esi-sba.dz/

Résumé: 3D face recognition (FR) has been successfully applied using Convolutional Neural Networks (CNN), which have demonstrated stunning results in diverse computer vision and image classification tasks. Training CNNs, however, requires estimating millions of parameters, which demands high-performance computing capacity and storage. To deal with this issue, we propose an efficient method based on the quantization of residual features extracted from a pre-trained ResNet-50 model. The method starts by describing each 3D face using a convolutional feature extraction block, and then applies the Bag-of-Features (BoF) paradigm to learn deep neural networks (we call it Deep BoF). To do so, we apply Radial Basis Function (RBF) neurons to quantize the deep features extracted from the last convolutional layers. An SVM classifier is then applied to classify faces according to their quantized term vectors. The obtained model is lightweight compared to a classical CNN and allows classifying arbitrarily sized images. The experimental results on the FRGCv2 and Bosphorus datasets show the strength of our method compared to state-of-the-art methods.
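
A sketch of the Deep BoF idea described above: local activations from a pre-trained ResNet-50 are softly quantized by RBF units, and the pooled representation feeds an SVM. Using KMeans centers as the RBF codewords, the gamma value, and the `depth_image_paths`/`labels` placeholders are assumptions; in the paper the RBF neurons are learned within the network.

```python
# Deep features -> RBF soft quantization -> SVM (illustrative sketch, not the exact paper code).
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.cluster import KMeans
from sklearn.svm import SVC

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone = torch.nn.Sequential(*list(resnet.children())[:-2]).eval()
prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor(),
                           transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def local_features(path):
    """Spatial activations of the last convolutional block as a set of local descriptors."""
    with torch.no_grad():
        fmap = backbone(prep(Image.open(path).convert("RGB")).unsqueeze(0))   # (1, 2048, 7, 7)
    return fmap.squeeze(0).reshape(2048, -1).T.numpy()                        # (49, 2048)

def rbf_bof(desc, centers, gamma=1e-3):
    """Soft quantization: RBF response of each local feature to each codeword, averaged over the image."""
    d2 = ((desc[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    resp = np.exp(-gamma * (d2 - d2.min(axis=1, keepdims=True)))   # stabilized per-row responses
    resp /= resp.sum(axis=1, keepdims=True)
    return resp.mean(axis=0)

# depth_image_paths / labels are hypothetical 2D renderings of the 3D faces with identity labels.
descs = [local_features(p) for p in depth_image_paths]
centers = KMeans(n_clusters=64, random_state=0).fit(np.vstack(descs)).cluster_centers_
X = np.array([rbf_bof(d, centers) for d in descs])
svm = SVC(kernel="linear").fit(X, labels)
```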