International publications

International conference papers

2025
Loubna BOUGHELOUM, Mounir BOUSBIA SALAH, Maamar BETTAYEB (2025), "A Visually Impaired AI Guided Indoor System for Room Classification and Depth-Based Localization", The 2025 International Conference on the Leadership and Management of Projects in the Digital Age (ICLAMP2025), Springer, https://link.springer.com/conference/iclamp

Abstract: Indoor navigation remains a significant challenge for visually impaired individuals, requiring accurate room classification and precise user localization. In this paper, we propose an AI-guided system that combines Convolutional Neural Networks (CNNs) for room classification with a depth-based Kalman filter for user localization. Our approach leverages RGB images for room classification and depth images for real-time user position estimation, eliminating the need for additional sensors or complex sensor fusion. We implement and evaluate our system using the NYU v2 indoor dataset. The CNN achieves 90% accuracy, outperforming a Multi-Layer Perceptron (MLP), which attains 74% accuracy. For user localization, we compare a depth-based Kalman filter with a Particle Filter. The Kalman filter demonstrates lower Root Mean Square Error (RMSE) and Mean Absolute Error (MAE), ensuring more precise and stable position estimates. While the system is not yet deployed in real-world applications, our findings highlight its potential for practical use in smart homes, assistive navigation, and augmented reality.
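The depth-based localization idea can be illustrated with a toy scalar Kalman filter that smooths noisy depth readings into a stable position estimate. This is a minimal sketch, not the paper's implementation; the process/measurement noise values `q` and `r` are illustrative assumptions.

```python
# Minimal sketch: a 1-D Kalman filter smoothing noisy depth readings
# (metres) into a stable user-position estimate. The noise parameters
# q (process) and r (measurement) are illustrative assumptions.

def kalman_1d(measurements, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Filter a sequence of noisy depth measurements."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: random-walk motion model, uncertainty grows by q
        p += q
        # Update: blend prediction with measurement z (noise variance r)
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

# Noisy depth readings scattered around a true distance of 2.0 m
est = kalman_1d([2.3, 1.8, 2.1, 1.9, 2.2])
```

Each update pulls the estimate toward the new measurement by the gain `k`, so the output trace is smoother than the raw sensor stream, which is the property that lowers RMSE/MAE relative to the unfiltered readings.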

2024
Loubna BOUGHELOUM, Mounir BOUSBIA SALAH, Maamar BETTAYEB (2024), "Fuzzy Controller-Assisted Obstacle Avoidance System for Visual Impairment", The 2024 International Conference on Decision Aid Sciences and Applications (DASA), December 11–12, 2024, Manama, Kingdom of Bahrain, IEEE, https://www.asu.edu.bh/the-international-conference-on-decision-sciences-and-applications-dasa-2024/

Abstract: Navigating safely and independently through dynamic environments is a crucial concern for the visually impaired community. This paper presents an innovative approach to the obstacle avoidance challenges faced by visually impaired individuals, using fuzzy logic-based navigation integrated with audio feedback. The system combines fuzzy logic control with a pair of laser distance sensors to enable real-time obstacle detection. The dual-sensor setup enhances accuracy in obstacle localization, while fuzzy logic provides adaptive decision-making capabilities. The system's audio response mechanism conveys obstacle proximity information audibly, ensuring effective communication with the user. The system's efficacy is validated through simulations and several tests, affirming its potential to enhance obstacle detection and foster confident navigation for the visually impaired. This work contributes to advancing assistive technologies for visually impaired individuals, improving their mobility and fostering their autonomy.
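The dual-sensor fuzzy decision described above can be sketched as follows. The membership breakpoints, rule outputs, and thresholds are illustrative assumptions, not the controller from the paper.

```python
# Illustrative sketch: two laser distance readings (left/right) are
# fuzzified into a "near" degree, and simple rules pick a steering
# command plus an audio-alert urgency. All numbers are assumptions.

def near(d, full=0.5, zero=2.0):
    """Membership degree of 'near': 1 within `full` m, 0 beyond `zero` m."""
    if d <= full:
        return 1.0
    if d >= zero:
        return 0.0
    return (zero - d) / (zero - full)   # linear ramp in between

def steer(left_d, right_d):
    """Return (command, alert_urgency) from the two sensor distances."""
    nl, nr = near(left_d), near(right_d)
    urgency = max(nl, nr)               # drives the audio proximity cue
    if urgency < 0.2:
        return "forward", urgency       # path is clear
    if nl > nr:
        return "turn_right", urgency    # obstacle nearer on the left
    if nr > nl:
        return "turn_left", urgency     # obstacle nearer on the right
    return "stop", urgency              # blocked on both sides
```

The graded `urgency` value is what makes the controller fuzzy rather than a hard threshold: the audio cue can scale continuously with proximity instead of switching abruptly.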

Loubna BOUGHELOUM, Mounir BOUSBIA SALAH, Maamar BETTAYEB (2024), "An Advanced Object Detection for the Visually Impaired by using YOLOv5+", African Conference on Research in Computer Science and Applied Mathematics - Digital Science in Africa (CARI’2024), Springer, https://link.springer.com/book/10.1007/978-3-031-88226-5

Abstract: This paper presents an improved version of the YOLOv5 architecture adapted specifically for applications in assisting visually impaired individuals. Leveraging advancements in computer vision and deep learning techniques, our modified YOLOv5 model offers enhanced capabilities in object detection and scene understanding, crucial for aiding visually impaired users in navigating and comprehending their surroundings. The modifications to the YOLOv5 architecture include the incorporation of Cross Stage Partial-Pooling with 3x3 filters (C3) blocks in both the backbone and head sections, along with the integration of a Transformer module (C3TR) at the end of the backbone. These enhancements facilitate more effective feature extraction, enabling the model to capture diverse representations of visual stimuli and address complex visual patterns and dependencies inherent in real-world scenes. Furthermore, the modified architecture maintains consistency in feature processing throughout the network, contributing to improved robustness and accuracy in object detection tasks. Through rigorous experimentation and evaluation, we demonstrate the efficacy of our proposed modifications, showcasing notable advancements in the model's ability to detect objects accurately and provide comprehensive scene descriptions. The improved YOLOv5 architecture holds significant promise in empowering visually impaired individuals with enhanced perceptual capabilities, thereby fostering greater independence and accessibility in their daily lives.
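The placement of the described modules can be sketched as a YOLOv5-style module list. This is a hypothetical layout to show where the C3 and C3TR blocks sit, not the authors' actual configuration; channel counts and the exact stage ordering are assumptions.

```python
# Illustrative sketch (not the paper's config) of a YOLOv5-style module
# list with C3 blocks in backbone and head, and a transformer block
# (C3TR) appended at the end of the backbone. Channels are assumptions.

BACKBONE = [
    ("Conv", 64), ("Conv", 128),
    ("C3", 128),                 # C3 block (3x3 filters) in the backbone
    ("Conv", 256), ("C3", 256),
    ("Conv", 512), ("C3", 512),
    ("SPPF", 512),
    ("C3TR", 512),               # transformer module at end of backbone
]
HEAD = [
    ("Conv", 256), ("C3", 256),  # C3 blocks kept in the head as well
    ("Conv", 512), ("C3", 512),
    ("Detect", None),
]

def count_blocks(layers, kind):
    """Count how many modules of a given type appear in a module list."""
    return sum(1 for name, _ in layers if name == kind)
```

Keeping the same block type (C3) in both backbone and head is what the abstract calls "consistency in feature processing"; the single C3TR at the deepest stage adds global self-attention where feature maps are smallest and attention is cheapest.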

Sabah GAMOURI, Mounir BOUSBIA SALAH (2024), "Intelligent classification of electrocardiogram signal using extended Kalman Filter based Multi Layer Perceptron Neural Network", 1st International Conference on Power and Modern Electrical Systems (ICPMES 2024), October 29–31, 2024, Skikda, Algeria, https://tech.univ-skikda.dz/index.php/fr/e-learning/2-uncategorised/188-first-international-conference-on-power-and-modern-electrical-systems-icpmes-2024-october-29-31-2024-skikda-algeria

Abstract: In this paper we propose an intelligent system for automatic recognition of heart rhythm using an Extended Kalman Filter based Multi Layer Perceptron Neural Network (EKF-MLPNN). The system distinguishes five classes of heart rhythm: normal rhythm (N), Left Bundle Branch Block (LBBB), Right Bundle Branch Block (RBBB), ventricular extrasystoles (ESV), and Atrial Premature Contraction (APC). A QRS-complex detection algorithm is implemented first. In a second stage, the input feature vectors are built from detrended fluctuation analysis (DFA), heart rate variability (HRV), and the same parameters on which a cardiologist bases treatment; a classifier based on EKF-MLPNN is then developed. Experimental results obtained by testing the proposed approach on ECG records from the MIT-BIH database show the effectiveness of this approach, with a total classification rate of 98.76%.
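The core EKF-MLPNN idea, treating the network weights as the Kalman filter state and the network output as the measurement, can be shown on a single sigmoid neuron. A full multi-layer network and the paper's ECG features are beyond this toy sketch; the noise settings `q` and `r` are illustrative assumptions.

```python
# Minimal sketch of EKF-based training for one sigmoid neuron: the
# weight vector is the EKF state, the neuron output is the measurement,
# and the output gradient plays the role of the Jacobian H. The noise
# settings q and r are illustrative assumptions, not the paper's values.
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def ekf_train(samples, n_in, q=1e-4, r=0.1, epochs=20):
    """samples: list of (input_vector, target in {0,1}); returns weights."""
    n = n_in + 1                          # weights + bias form the state
    w = [0.0] * n
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(epochs):
        for x, t in samples:
            xb = list(x) + [1.0]          # append bias input
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, xb)))
            h = [y * (1.0 - y) * xi for xi in xb]   # output Jacobian
            Ph = [sum(P[i][j] * h[j] for j in range(n)) for i in range(n)]
            s = sum(h[i] * Ph[i] for i in range(n)) + r   # innovation var
            k = [Ph[i] / s for i in range(n)]             # Kalman gain
            err = t - y
            w = [w[i] + k[i] * err for i in range(n)]     # state update
            for i in range(n):            # P <- P - k (P h)^T + q I
                for j in range(n):
                    P[i][j] -= k[i] * Ph[j]
                P[i][i] += q
    return w

# Sanity check: learn logical OR with this hypothetical training loop
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w = ekf_train(data, n_in=2)
```

Unlike plain backpropagation, each sample updates the weights through a Kalman gain scaled by the weight covariance `P`, which is the mechanism behind the fast per-sample convergence that EKF training is typically used for.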