Yazar "Yurdakul, Mustafa" seçeneğine göre listele

Now showing 1 - 6 of 6
  • ABC-based weighted voting deep ensemble learning model for multiple eye disease detection
    (Elsevier Sci Ltd, 2024) Uyar, Kübra; Yurdakul, Mustafa; Taşdemir, Şakir
    Background and objective: The eye is the unique organ that provides vision, and various disorders can cause visual impairment. Therefore, early identification of eye diseases is important so that the necessary precautions can be taken. Convolutional Neural Networks (CNNs), successfully used in various image analysis problems due to their automatic, data-dependent feature learning ability, can be employed with ensemble learning. Methods: A novel approach that combines CNNs with the robustness of ensemble learning to classify eye diseases was designed. From a comprehensive evaluation of fifteen pre-trained CNN models on the Eye Disease Dataset (EDD), the three models with the best classification performance were identified. Instead of employing traditional ensemble methods, these CNN models were integrated using a weighted-voting mechanism, where the contribution of each model was determined by the Artificial Bee Colony (ABC) algorithm. The core innovation lies in the use of the ABC algorithm, a departure from conventional methods, to derive these optimal weights. This integration and optimization process culminates in ABCEnsemble, designed to offer enhanced predictive accuracy and generalization in eye disease classification. Results: To apply weighted voting and determine the optimized weights of the three best-performing CNN models, various optimization methods were analyzed. The average performance metrics obtained with ABCEnsemble on the EDD were 98.84% accuracy, 98.90% precision, 98.84% recall, and 98.85% F1-score. Conclusions: The eye disease classification accuracy of 93.17% obtained with DenseNet169 was increased to 98.84% by ABCEnsemble. The design of ABCEnsemble and the experimental findings of the proposed approach provide significant contributions to the related literature.
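The weighted-voting step described in the abstract above is easy to picture in code: each CNN contributes its softmax probabilities, and a weight vector found by an optimizer scales each model's contribution before the class scores are summed. The sketch below is a minimal illustration with synthetic predictions; a plain random search stands in for the ABC algorithm, and every shape and variable name is an assumption rather than the authors' implementation.

# Minimal sketch of weighted soft voting over three classifiers' softmax outputs.
# The weights come from an Artificial Bee Colony search in the paper; here a
# plain random search stands in for the optimizer (illustration only).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical validation predictions: (n_models, n_samples, n_classes) softmax outputs.
probs = rng.dirichlet(np.ones(4), size=(3, 200))   # 3 models, 200 samples, 4 classes
labels = rng.integers(0, 4, size=200)              # assumed ground-truth labels

def ensemble_accuracy(weights):
    """Accuracy of the weighted soft-voting ensemble for a given weight vector."""
    w = np.asarray(weights) / np.sum(weights)       # normalize weights to sum to 1
    fused = np.tensordot(w, probs, axes=1)          # weighted sum over models -> (n_samples, n_classes)
    return np.mean(fused.argmax(axis=1) == labels)

# Stand-in for the ABC search: sample candidate weight vectors and keep the best.
best_w, best_acc = None, -1.0
for _ in range(500):
    candidate = rng.random(3)
    acc = ensemble_accuracy(candidate)
    if acc > best_acc:
        best_w, best_acc = candidate / candidate.sum(), acc

print("best weights:", best_w, "validation accuracy:", best_acc)

In the paper, the objective evaluated by ABC would be the validation performance of the fused predictions of the real models rather than the toy accuracy computed here.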
  • Almond (Prunus dulcis) varieties classification with genetic designed lightweight CNN architecture
    (Springer, 2024) Yurdakul, Mustafa; Atabaş, İrfan; Taşdemir, Şakir
    Almond (Prunus dulcis) is a nutritious food with a rich nutrient content. In addition to being consumed as food, it is also used for various purposes in sectors such as medicine, cosmetics, and bioenergy. With all these uses, almond has become a globally demanded product. Accurately determining the almond variety is crucial for quality assessment and market value. Convolutional Neural Networks (CNNs) perform very well in image classification. In this study, a public dataset containing images of four different almond varieties was created. Five well-known lightweight CNN models (DenseNet121, EfficientNetB0, MobileNet, MobileNet V2, NASNetMobile) were used to classify the almond images. Additionally, a model called 'Genetic CNN', whose hyperparameters are determined by a Genetic Algorithm, was proposed. Among the well-known lightweight CNN models, NASNetMobile achieved the most successful result with an accuracy of 99.20%, precision of 99.21%, recall of 99.20%, and F1-score of 99.19%. Genetic CNN outperformed the well-known models with an accuracy of 99.55%, precision of 99.56%, recall of 99.55%, and F1-score of 99.55%. Furthermore, the Genetic CNN model has a relatively small size and low test time in comparison to the other models, with a parameter count of only 1.1 million. Genetic CNN is therefore suitable for embedded and mobile systems and can be used in real-life solutions.
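As a rough illustration of the search behind 'Genetic CNN', the sketch below runs a small genetic algorithm over an assumed hyperparameter space with a stubbed fitness function; in the actual study the fitness would be the validation accuracy of a network trained with those hyperparameters, and neither the search space nor the operators here are taken from the paper.

# Minimal sketch of a genetic search over CNN hyperparameters, in the spirit of
# the "Genetic CNN" described above. The fitness function is a stub; in practice
# it would train the candidate network and return its validation accuracy.
import random

random.seed(0)

SEARCH_SPACE = {                      # hypothetical hyperparameter space
    "filters":     [16, 32, 64, 128],
    "kernel_size": [3, 5],
    "dense_units": [64, 128, 256],
    "dropout":     [0.2, 0.3, 0.5],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    # Placeholder: stands in for "build the CNN with these hyperparameters,
    # train it on the almond images, return validation accuracy".
    return 1.0 - abs(ind["filters"] - 64) / 128 - ind["dropout"] / 10

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(v) if random.random() < rate else ind[k])
            for k, v in SEARCH_SPACE.items()}

population = [random_individual() for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]          # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("best hyperparameters:", max(population, key=fitness))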
  • Brain Tumor Detection with Ensemble of Convolutional Neural Networks and Vision Transformer
    (Institute of Electrical and Electronics Engineers Inc., 2023) Yurdakul, Mustafa; Tasdemir, Sakir
    Brain tumors are recognized as one of the most lethal cancer types worldwide. Detecting brain tumors using medical imaging techniques is a challenging task due to their complex anatomical structures. Traditional methods rely on specialists meticulously examining MRI scan images. However, this approach is not only time-consuming but also carries a significant risk of error. Therefore, there is a need for more effective methods to detect brain tumors from MRI images. In this study, an ensemble model was proposed for classifying tumor types from MRI scans. Initially, sixteen well-known Convolutional Neural Network (CNN) models and four Vision Transformer (ViT) models were trained on the Brain Tumor Dataset, which contains 3264 MRI scan images. Subsequently, by combining the top three high-performing models, we achieved a robust classification performance. Experimental results demonstrate that our proposed model provides satisfactory performance compared to existing methods. © 2023 IEEE.
  • Chestnut (Castanea Sativa) Varieties Classification with Harris Hawks Optimization based Selected Features and SVM
    (Institute of Electrical and Electronics Engineers Inc., 2024) Yurdakul, Mustafa; Atabaş, Irfan; Taşdemir, Şakir
    Chestnut (Castanea sativa) is a nutritious food with a hard outer shell. It is also used in different sectors for various purposes. Chestnut is a commercial product that is in demand worldwide due to its multi-purpose use. To determine the market value of chestnuts, it is necessary to classify them according to their varieties. With classical methods, people classify them manually; however, this approach is tiring and error-prone. In this study, to classify chestnut varieties, features were extracted from chestnut images using various feature extraction methods. The extracted features were combined and classified with the Linear, Polynomial, and Radial Basis Function (RBF) kernels of a Support Vector Machine (SVM). The combined handcrafted features with the RBF kernel achieved an accuracy of 94.28%, precision of 93.83%, recall of 93.98%, F1-score of 93.84%, and AUC of 99.25%. Furthermore, the most relevant features were selected using the Arithmetic Optimization, Harris Hawks, and Sooty Tern algorithms. The features selected by Harris Hawks Optimization, classified with the RBF kernel, showed the best performance with an accuracy of 95.84%, precision of 95.56%, recall of 95.51%, F1-score of 95.46%, and AUC of 99.45%. © 2024 IEEE.
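The select-then-classify pipeline in this abstract can be pictured as a binary mask over the feature columns, scored by an RBF-kernel SVM. The snippet below uses synthetic data and a random mask search as a stand-in for Harris Hawks Optimization; it illustrates the shape of the search, not the authors' pipeline.

# Minimal sketch of metaheuristic feature selection for an RBF-kernel SVM:
# a binary mask picks a feature subset, cross-validated accuracy scores it.
# Random mask sampling stands in for the Harris Hawks Optimization loop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=60, n_informative=15, random_state=0)

def mask_score(mask):
    """Cross-validated accuracy of an RBF SVM on the selected feature columns."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

best_mask, best_score = None, -1.0
for _ in range(50):                        # stand-in for the HHO search iterations
    mask = rng.random(X.shape[1]) < 0.5    # random binary feature mask
    score = mask_score(mask)
    if score > best_score:
        best_mask, best_score = mask, score

print(f"selected {best_mask.sum()} of {X.shape[1]} features, CV accuracy {best_score:.3f}")

A metaheuristic such as HHO would replace the random sampling loop with its own position-update rules, but the objective it maximizes is the same cross-validated accuracy.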
  • Flower Pollination Algorithm-Optimized Deep CNN Features for Almond (Prunus dulcis) Classification
    (Institute of Electrical and Electronics Engineers Inc., 2024) Yurdakul, Mustafa; Atabas, Irfan; Tasdemir, Sakir
    Almond is a nut rich in essential nutrients. In addition to being a food, it is also used in cosmetics and the pharmaceutical industry. The market value of almonds is determined according to their quality. Manually determining the quality of almonds is an error-prone, time-consuming, and tiring process. For these reasons, in this study twelve well-known pre-trained CNNs were used to classify almonds as normal or damaged. Then, the most successful model was used as a feature extractor, and the extracted features were classified with various machine learning algorithms. In addition, features were selected using the Flower Pollination Algorithm (FPA), and the classification process was carried out on the selected features. Experimental results showed that using CNNs as feature extractors and classifying with machine learning algorithms can provide better results than the classical softmax structure. In addition, the proposed FPA-based feature selection increases the classification performance. © 2024 IEEE.
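The "CNN as feature extractor feeding classical classifiers" idea described above can be sketched in a few lines. A ResNet-18 backbone and an SVM are assumed purely for illustration; the study used the most successful of twelve pre-trained CNNs and several machine learning algorithms, none of which are reproduced here.

# Minimal sketch of the deep-features + classical-classifier pipeline described above.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.svm import SVC

backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()              # drop the classification head, keep 512-d features
backbone.eval()

# Dummy stand-ins for almond images and their normal/damaged labels.
images = torch.randn(32, 3, 224, 224)
labels = [0, 1] * 16

with torch.no_grad():
    features = backbone(images).numpy()  # (32, 512) deep feature vectors

# Any classical classifier can sit on top of the deep features; an SVM is shown.
clf = SVC(kernel="rbf").fit(features, labels)
print("training accuracy:", clf.score(features, labels))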
  • Vision Transformer-based Automatic Detection of COVID-19 in Chest X-ray Images
    (Institute of Electrical and Electronics Engineers Inc., 2023) Yurdakul, Mustafa; Tasdemir, Sakir
    The COVID-19 virus, which first emerged in the city of Wuhan in China, spread rapidly across the globe due to its high contagiousness. Detecting the virus early is crucial to stop its spread and to provide timely treatment to affected individuals. Chest X-ray (CXR) imaging is a quick, cost-effective, and non-invasive method commonly used for the diagnosis of COVID-19. CXR images are manually inspected by experts for diagnosis. However, manual inspection is not only time-consuming but also prone to errors due to human fatigue. For these reasons, there is an urgent need for a system that can detect COVID-19 from CXR images. In this study, the Vision Transformer (ViT) model was used to classify CXR images as Normal, Pneumonia, or COVID-19. Experimental results show that the ViT possesses robust and high generalization capability, with an accuracy of 97%, indicating its significant potential in medical image analysis. © 2023 IEEE.
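As a rough sketch of the setup described above, the snippet below adapts an ImageNet-pretrained Vision Transformer to the three CXR classes and runs a single training step on a dummy batch. The torchvision vit_b_16 backbone, the optimizer, the learning rate, and the label encoding are all assumptions, since the abstract does not specify them.

# Minimal sketch of adapting a pre-trained Vision Transformer to the three CXR
# classes (Normal, Pneumonia, COVID-19). torchvision's vit_b_16 is assumed here.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)        # ImageNet-pretrained ViT
model.heads.head = nn.Linear(model.heads.head.in_features, 3)   # 3-class output head

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 "CXR" images.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 2, 0])      # 0=Normal, 1=Pneumonia, 2=COVID-19 (assumed encoding)
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("batch loss:", loss.item())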
