Browse by author "Horasan, Fahrettin"
Now showing 1 - 15 of 15
Item: A novel image watermarking scheme using ULV decomposition (Elsevier GmbH, 2022) Horasan, Fahrettin
Matrix decompositions play an important role in most watermarking techniques; Singular Value Decomposition (SVD) is among the most widely used, and decompositions such as Non-negative Matrix Factorization (NMF), QR, and LU have also appeared in previous studies. This study proposes a new scheme using the ULV Decomposition (ULVD) as an alternative matrix decomposition. In this frequency-based scheme, which applies an R-level Discrete Wavelet Transform (DWT), the scaling factor is determined adaptively from the cover image and the watermark. Another issue to be solved in watermarking is the False Positive Problem (FPP); a control mechanism guards against it during both watermark embedding and extraction. In addition, the watermark can be of any size, subject to the size of the cover image. The experiments performed show that the proposed scheme provides high imperceptibility and robustness.

Item: A novel model based collaborative filtering recommender system via truncated ULV decomposition (Elsevier, 2023) Horasan, Fahrettin; Yurttakal, Ahmet Hasim; Gunduz, Selcuk
Collaborative filtering is a technique that takes into account the common characteristics of users and items in recommender systems. Matrix decompositions are among the most used techniques in collaborative-filtering-based recommendation, with Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF) approaches being widespread. Although they deal well with the scalability problem, their complexity is high. In this study, the Truncated ULV decomposition (T-ULVD) is used as an alternative technique to improve the accuracy and quality of recommendations.
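The low-rank idea behind such model-based collaborative filtering can be sketched with an ordinary truncated SVD standing in for the paper's T-ULVD; the toy rating matrix, the rank k=2, and the user-mean fill-in heuristic below are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def predict_ratings(R, k=2):
    """Predict missing ratings (zeros) via a rank-k truncated SVD.

    Unknown entries are filled with each user's mean before factoring;
    this is a common heuristic, not necessarily the paper's choice."""
    R = np.asarray(R, dtype=float)
    mask = R > 0
    means = R.sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
    filled = np.where(mask, R, means[:, None])
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k]      # rank-k reconstruction
    return np.where(mask, R, approx)          # keep observed ratings as-is

# toy 4-user x 3-item rating matrix, 0 = unknown
R = [[5, 4, 0],
     [4, 5, 1],
     [1, 0, 5],
     [2, 1, 4]]
P = predict_ratings(R, k=2)
```

Observed ratings pass through unchanged; only the zero entries receive low-rank predictions.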
The proposed method was tested with the Movielens 100K, Movielens 1M, Filmtrust, and Netflix datasets, which are widely used in recommender-system research. To assess the performance of the proposed model, standard metrics (MAE, RMSE, precision, recall, and F1 score) were used. T-ULVD improved on NMF in all experiments and obtained very close or better results compared to SVD. Moreover, this study may guide future T-ULVD-based work on solving the cold-start problem and reducing sparsity in collaborative-filtering-based recommender systems. © 2023 The Author(s). Published by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Item: Alternate Low-Rank Matrix Approximation in Latent Semantic Analysis (Hindawi Ltd, 2019) Horasan, Fahrettin; Erbay, Hasan; Varcin, Fatih; Deniz, Emre
Latent semantic analysis (LSA) is a mathematical/statistical way of discovering hidden concepts between terms and documents within a document collection (i.e., a large corpus of text). Each document and each term is expressed as a vector with elements corresponding to these concepts, forming a term-document matrix. The LSA then uses a low-rank approximation to the term-document matrix to remove irrelevant information, extract the more important relations, and reduce computational time. The irrelevant information, called noise, has no noteworthy effect on the meaning of the collection; removing it is an essential step in LSA. The singular value decomposition (SVD) has been the main tool for obtaining the low-rank approximation. Since the document collection is dynamic (i.e., the term-document matrix is subject to repeated updates), the approximation must be renewed, either by recomputing the SVD or by updating it.
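The SVD-based low-rank step that LSA relies on (and that the truncated ULV decomposition replaces) can be sketched as follows; the tiny term-document matrix and the rank k=2 are made-up values for illustration only:

```python
import numpy as np

def lsa_term_vectors(A, k=2):
    """Rank-k LSA: factor the term-document matrix A and return
    k-dimensional concept-space coordinates for each term (row)."""
    U, s, Vt = np.linalg.svd(np.asarray(A, dtype=float), full_matrices=False)
    return U[:, :k] * s[:k]          # term coordinates in concept space

def cosine(u, v):
    """Cosine similarity between two concept-space vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# rows = terms, columns = documents (toy occurrence counts)
A = [[2, 1, 0, 0],   # "matrix"
     [1, 2, 0, 0],   # "decomposition"
     [0, 0, 1, 2],   # "football"
     [0, 0, 2, 1]]   # "goal"
T = lsa_term_vectors(A, k=2)
```

In the reduced space, terms that co-occur in the same documents ("matrix", "decomposition") end up far more similar than unrelated terms, which is exactly the latent structure the abstract describes.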
However, the computational cost of recomputing or updating the SVD of the term-document matrix is very high when new terms and/or documents are added to a preexisting collection. This issue opened the door to other matrix decompositions for LSA, such as ULV- and URV-based decompositions. This study shows that the truncated ULV decomposition (TULVD) is a good alternative to the SVD in LSA modeling.

Item: Alternatif düşük ranklı matris ayrışımı ile gizli anlamsal dizinleme [Latent semantic indexing with an alternative low-rank matrix decomposition] (Kırıkkale Üniversitesi, 2018) Horasan, Fahrettin; Erbay, Hasan
The volume of data stored digitally by ever more widely used computers grows day by day. Unless processed or analyzed, however, these data are nothing more than an archive. For this reason, practitioners in many sectors, such as statisticians, economists, business planners, advertising analysts, and communication engineers, continuously research and develop ways to extract meaningful information from stored data. Researchers essentially seek to reach general conclusions from large piles of data, to find known or unknown problems, to solve them, to develop solution methods, to predict the effect of a possible change, and to carry out their operations and experiments independently of time and data sources. This study proposes a low-rank matrix decomposition as an alternative to the Singular Value Decomposition (SVD) used in Latent Semantic Indexing (LSI), an information retrieval approach that aims to accurately access the desired documents and/or information within a huge document collection. In the LSI model, each term in the collection and the documents containing those terms are numericized with linear algebra methods and represented in a vector space. The standard method for obtaining this vector space is the SVD.
However, the high computational and memory cost of this SVD-based process steers researchers toward alternative methods. With the Truncated ULV Decomposition proposed here as the low-rank matrix decomposition, the cost of obtaining the vector space is lower than with the SVD. A further advantage is that the block-update process used to represent new documents added to the collection is easier and cheaper. Datasets widely used in information retrieval studies were chosen to compare the two LSI systems built with the truncated ULV decomposition and the SVD. Finally, a Turkish dataset was built from news texts collected from Turkish news pages with a bot, and the performance of both LSI systems was also observed on this dataset. The analyses show that the two indexing models perform very similarly on all datasets. Because of the ease and low cost of its block-update method, the truncated ULV decomposition is concluded to be a good alternative matrix decomposition to the SVD.

Item: Block classical Gram-Schmidt-based block updating in low-rank matrix approximation (Scientific and Technical Research Council of Turkey - TÜBİTAK, 2018) Erbay, Hasan; Varcin, Fatih; Horasan, Fahrettin; Bicer, Cenker
Low-rank matrix approximations have recently gained broad popularity in scientific computing. They are used to extract correlations and remove noise from matrix-structured data with limited loss of information. The truncated singular value decomposition (SVD) is the main tool for computing a low-rank approximation. However, in applications such as latent semantic indexing, where document collections are dynamic over time (i.e., the term-document matrix is subject to repeated updates), the SVD becomes prohibitive due to its high computational expense.
Alternative decompositions have been proposed for these applications, such as low-rank ULV/URV decompositions and the truncated ULV decomposition. Herein, we propose a BLAS-3-compatible block-updating truncated ULV decomposition algorithm based on the block classical Gram-Schmidt process. The simulation results presented show that the block-update algorithm is promising.

Item: Darknet Web Traffic Classification via Gradient Boosting Algorithm (Kırıkkale Üniversitesi, 2022) Horasan, Fahrettin; Yurttakal, Ahmet Haşim
Classification of network traffic not only contributes to improving the quality of institutions' network services but also helps protect important data. Machine learning algorithms are frequently used to classify network traffic, since port-based and load-based classification is insufficient on encrypted networks. In this study, VPN and Tor network traffic, combined into a darknet category, was classified with the Gradient Boosting algorithm. 70% of the dataset was reserved for training and 30% for testing, with 10-fold cross-validation applied on the training set. Network flows in eight categories (Audio-Streaming, Browsing, Chat, E-mail, P2P, File Transfer, Video-Streaming, and VOIP) were classified with 99.8% accuracy. The proposed method automates network analysis of darknet traffic, enabling organizations to protect their important data quickly and with high accuracy.

Item: Decision Trees in Large Data Sets (Kırıkkale Üniversitesi, 2021) Çetinkaya, Zeynep; Horasan, Fahrettin
Data mining is the process of obtaining information used to identify and define relationships between data of different kinds. One of the important problems encountered in this process is classification in large data sets. Extensive research has been done on this classification problem, and different solution methods have been introduced.
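Gradient boosting, as used for the darknet traffic classes above, fits each new weak learner to the residual errors of the current ensemble. A from-scratch least-squares sketch with one-feature decision stumps follows; the toy data, learning rate, and stump learner are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-threshold stump minimizing squared error on the residual."""
    best_err, best_stump = np.inf, None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= t, left.mean(), right.mean())
        err = ((residual - pred) ** 2).sum()
        if err < best_err:
            best_err, best_stump = err, (t, left.mean(), right.mean())
    return best_stump

def gradient_boost(x, y, rounds=20, lr=0.3):
    """Boosted stumps: each round fits the current residual y - pred."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return pred, stumps

# toy separable data: label 1 when x > 0.5
x = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
pred, _ = gradient_boost(x, y)
labels = (pred > 0.5).astype(int)
```

Thresholding the boosted regression output turns this into the binary decision used in classification; production implementations add a loss gradient, shrinkage schedules, and deeper trees.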
Decision tree algorithms are among the structures that can be used effectively in this field. This article discusses various decision tree structures and algorithms used for classification in large data sets. Along with definitions of the algorithms, their similarities and differences are identified and their advantages and disadvantages investigated.

Item: DWT-SVD Based Watermarking for High-Resolution Medical Holographic Images (Wiley-Hindawi, 2022) Horasan, Fahrettin; Pala, Muhammed Ali; Durdu, Ali; Akgul, Akif; Akmese, Omer Faruk; Yildiz, Mustafa Zahid
Watermarking is one of the most common techniques used to protect data authenticity, integrity, and security. Obfuscation in the frequency domain makes watermarking stronger than obfuscation in the spatial domain and occupies an important place in watermarking work on imperceptibility, capacity, and robustness. Finding the optimal location to hide the watermark is one of the most challenging tasks in these methods and affects their performance. In this article, sample identification information is embedded by watermarking, in a hiding environment created with a chaos-based random number generator, into biomedical data, addressing problems such as visual attack, identity theft, and information confusion. To obtain the biomedical data, a lensless digital in-line holographic microscopy (DIHM) setup was designed, and holographic data of human blood and cancer cell lines, which are widely used in the laboratory environment, were acquired. The standard USAF 1951 target was used to evaluate the resolution of the imaging setup. Various QR codes were generated for medical sample identification, and the captured medical data were watermarked with them using chaos-based random number generators.
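The DWT-SVD embedding pattern common to these schemes can be sketched as below: a one-level Haar DWT isolates the low-frequency band, and the watermark's singular values are added to that band's singular values. The Haar filter, the scaling factor alpha, and the random test arrays are illustrative assumptions, not the exact method of the paper:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT; returns (LL, (LH, HL, HH)) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def embed(cover, watermark, alpha=0.05):
    """Additively embed watermark singular values into the LL band's SVs."""
    LL, _ = haar_dwt2(cover)
    U, s, Vt = np.linalg.svd(LL)
    sw = np.linalg.svd(watermark, compute_uv=False)
    s_marked = s + alpha * sw                 # additive SV embedding
    return (U * s_marked) @ Vt                # watermarked LL band

rng = np.random.default_rng(0)
cover = rng.random((8, 8))                    # stand-in for a hologram image
wm = rng.random((4, 4))                       # stand-in for a QR watermark
LL_marked = embed(cover, wm)
```

A full scheme would run the inverse DWT to rebuild the image and keep U, Vt, and s for extraction; the sketch stops at the modified band to show where the embedding happens.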
A new method using chaos-based discrete wavelet transform (DWT) and singular value decomposition (SVD) was developed and applied to high-resolution data to prevent the encrypted data from being directly targeted by third-party attacks. The performance of the proposed watermarking method is demonstrated by various robustness and invisibility tests. Experimental results showed that the proposed scheme reached an average PSNR value of 56.4588 dB and an SSIM value of 0.9972 against several geometric and destructive attacks, meaning that the method does not degrade image quality while ensuring the security of the watermark information. The results show that it can be used efficiently in various fields.

Item: Kesik Tekil Değer Ayrışımı ve Ayrık Dalgacık Dönüşümü Kullanılarak Boyut İndirgeme Tabanlı Dayanıklı Dijital Görüntü Damgalama [Dimension-reduction-based robust digital image watermarking using truncated singular value decomposition and discrete wavelet transform] (2022) Yurttakal, Ahmet Haşim; Horasan, Fahrettin
Watermarking techniques used in areas such as copyright protection, authentication, fingerprinting, and content labeling generally rely on signal-processing transforms and mathematical techniques. In this research, a dimension-reduction-based Truncated SVD technique is used instead of the Singular Value Decomposition (SVD) preferred in most watermarking techniques, combined with the Discrete Wavelet Transform (DWT). Compared with the basic SVD-DWT method, the proposed method improves imperceptibility and robustness against all the attacks considered except histogram equalization. The proposed scheme is expected to guide alternative watermarking schemes that use different matrix decompositions and signal-processing transforms.

Item: Keyword Extraction for Search Engine Optimization Using Latent Semantic Analysis (Gazi Univ, 2021) Horasan, Fahrettin
It is now difficult to access desired information on the Internet, and search engines are constantly trying to overcome this difficulty.
However, web pages that cannot reach their target audience through search engines cannot become popular. For this reason, search engine optimization (SEO) is performed to increase visibility in search engines. In this process, a few keywords are selected from the textual content added to the web page. Determining these words normally requires someone knowledgeable about both the content and SEO; otherwise, an effective optimization cannot be achieved. In this study, keywords are extracted from textual data with the latent semantic analysis technique, which models the relations between documents/sentences and the terms in the text using linear algebra. According to the similarity values of the terms in the resulting vector space, the words that best represent the text are listed. This allows people without knowledge of the SEO process or the content to add content that complies with SEO criteria, reducing financial expense while helping web pages reach their target audience.

Item: Latent Semantic Analysis via Truncated ULV Decomposition (IEEE, 2016) Varcin, Fatih; Erbay, Hasan; Horasan, Fahrettin
Latent semantic analysis (LSA) usually uses the singular value decomposition (SVD) of the term-document matrix to discover the latent relationships within a document collection. By disregarding the smaller singular values of the term-document matrix, the SVD yields a vector space cleaned of the noise that distorts meaning. The latent semantic structure of the terms and documents is obtained by examining the relationships of the representative vectors in this vector space. However, the computational time for recomputing or updating the SVD of the term-document matrix is high when new terms and/or documents are added to a pre-existing collection.
Thus, a method is needed that not only has low computational complexity but also produces the correct semantic structure when the latent semantic structure is updated. This study shows that the truncated ULV decomposition is a good alternative to the SVD in LSA modelling with respect to both cost and producing the correct semantic structure.

Item: Latent Semantic Indexing-Based Hybrid Collaborative Filtering for Recommender Systems (Springer Heidelberg, 2022) Horasan, Fahrettin
Advances in information technologies increase the number and diversity of digital objects, which poses significant problems in reaching the target audience of digital products. Recommender systems (RS), which propose digital objects according to user profiles, aim to deal with these problems. In collaborative recommender systems (CRS), recommendations are made by considering similar digital objects. In this study, a hybrid model based on latent semantic indexing (LSI) is proposed for the CRS. User-based, item-based, and hybrid models were developed using LSI, which is generally encountered in text analysis, information retrieval, and information access. These models were compared with models based on the Pearson correlation coefficient (PCC), the measure most commonly used in CRS. Predictions were better in all LSI-based models, which also have lower computational complexity due to the dimension-reduction step. Moreover, the proposed hybrid model produced more accurate predictions than the user-based and item-based models.

Item: LSTM Network Based Sentiment Analysis for Customer Reviews (Gazi Univ, 2022) Bilen, Burhan; Horasan, Fahrettin
Continuously increasing data bring new problems, and problems usually reveal new research areas; sentiment analysis is one of these areas, and it has its difficulties.
The main difficulty is that people have complex sentiments, but this has not prevented progress in the field. Sentiment analysis is generally used to obtain information about persons by collecting their texts or expressions, and it can bring serious benefits. In this study, a binary classification was performed with a singular-tag, plural-class approach. An LSTM network and several machine learning models were tested on a dataset collected in Turkish and on the Stanford Large Movie Review dataset. Due to noise in the data, the Zemberek NLP library for Turkic languages and regular-expression techniques were used to normalize and clean the texts before transforming them into vector sequences. This preprocessing yielded a 2% increase in model performance on the Turkish customer-reviews dataset. The LSTM-based model showed better performance than the machine learning techniques, achieving an accuracy of 90.59% on the Turkish dataset and 89.02% on the IMDB dataset.

Item: Secure Encryption of Biomedical Images Based on Arneodo Chaotic System with the Lowest Fractional-Order Value (MDPI, 2024) Emin, Berkay; Akgul, Akif; Horasan, Fahrettin; Gokyildirim, Abdullah; Calgan, Haris; Volos, Christos
Fractional-order (FO) chaotic systems exhibit richer and more complex dynamic behaviors than integer-order ones. This inherent richness and complexity enhance the security of FO chaotic systems against various attacks in image cryptosystems. In the present study, a comprehensive examination of the dynamical characteristics of the fractional-order Arneodo (FOAR) system with cubic nonlinearity is conducted, analyzing phase planes, bifurcation diagrams, Lyapunov exponent spectra, and spectral entropy.
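The chaos-based RNG and encryption pipeline described here can be illustrated with a much simpler chaotic map; the logistic map below is only a stand-in for the fractional-order Arneodo system, and the byte-extraction rule and XOR cipher are illustrative assumptions rather than the paper's design:

```python
import numpy as np

def logistic_keystream(n, x0=0.4, r=3.99, burn=100):
    """Pseudo-random bytes from logistic-map iterates x -> r*x*(1-x)."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for _ in range(burn):                 # discard the transient
        x = r * x * (1 - x)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) & 0xFF      # low byte of the scaled state
    return out

def xor_cipher(img, key_seed=0.4):
    """Encrypt/decrypt a uint8 image by XOR with the chaotic keystream."""
    flat = img.reshape(-1)
    ks = logistic_keystream(flat.size, x0=key_seed)
    return (flat ^ ks).reshape(img.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy "image"
enc = xor_cipher(img)
dec = xor_cipher(enc)        # XOR is its own inverse with the same key
```

Because XOR with the same keystream is self-inverse, decryption just reruns the cipher with the shared seed; a real cryptosystem like the paper's adds statistical validation (NIST 800-22, ENT) and diffusion stages on top of this basic idea.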
Numerical studies show that the Arneodo chaotic system exhibits chaotic behavior when the lowest fractional-order (FO) value is set to 0.55. In this context, the aim is to securely encrypt biomedical images based on the Arneodo chaotic system with the lowest FO value using the Nvidia Jetson Nano development board. Although the lowest-FO system offers enhanced security in biomedical image encryption due to its richer dynamic behavior, it requires careful consideration of the trade-off between high memory requirements and the increasing complexity of the encryption algorithms. Within the scope of the study, a novel random number generator (RNG) is designed using the FOAR chaotic system, and the randomness of its output is verified with the internationally accepted NIST 800-22 and ENT test suites. A biomedical image encryption application is developed using the pseudo-random numbers, and the resulting images are evaluated with histogram, correlation, differential-attack, and entropy analyses. As a result, the study shows that encryption and decryption of biomedical images can be performed securely and quickly on a mobile Nvidia Jetson Nano development board.

Item: Yazılım Projelerindeki Kod Yorum Satırı Klonlarının Evrimsel Derin Öğrenme ile Tespiti [Detection of code comment line clones in software projects via evolutionary deep learning] (2023) Öztürk, Muhammed Maruf; Horasan, Fahrettin
Software development is labor-intensive, and the maintenance phase in particular takes more time than the other phases. In this process, every developer can write code comments regardless of software experience. However, just as copying code carries maintenance risks, non-unique code comments are one of the factors that negatively affect software testing. Indeed, for some programming languages such as Java, there are tools that generate test scenarios from code comments. One of the word-relation extraction methods used in code clone detection is Word2Vec.
However, this method can produce ambiguous outputs caused by missing dictionary entries. Moreover, most methods developed for code comment clone detection are not effective at cross-language clone detection. The greatest difficulty after detecting clone code comments is deleting them; this step is left to the developer, who manually deletes whichever of the two clone comments they choose. For this step, the source-copy comment relationship should be extracted automatically to guide the developer, so that the original comment block is preserved. With this problem in mind, this project develops an algorithm that mitigates Word2Vec dictionary ambiguity. Feature extraction in the algorithm is optimized with a genetic algorithm, and the success of the developed method in code clone detection is observed with the help of the GLMNET algorithm. Observations on detecting code clones across programming languages are as follows: 1) higher success (0.95) was obtained for Java, C, and C# than for Python and PHP; 2) in cross-language clone prediction, a model trained on Java produces more promising results for languages such as C and PHP; 3) the proposed method was found more suitable for type-1 and type-2 code comment clones.