Browse by Author "Erdal, Erdal"
Now showing 1 - 19 of 19
Item: A New Hybrid Approach Using GWO and MFO Algorithms to Detect Network Attack (Tech Science Press, 2023). Authors: Dalmaz, Hasan; Erdal, Erdal; Unver, Halil Murat
This paper addresses the urgent need to detect network security attacks, which have increased significantly in recent years, with high accuracy, and to avoid the adverse effects of these attacks. An intrusion detection system should respond seamlessly to attack patterns and approaches. The use of metaheuristic algorithms in attack detection can produce near-optimal solutions with low computational costs, and hybridizing such algorithms can further improve their performance and results. Many studies are currently being conducted on this topic. In this study, a new hybrid approach using the Gray Wolf Optimizer (GWO) and Moth-Flame Optimization (MFO) algorithms was developed and applied to widely used data sets such as NSL-KDD, UNSW-NB15, and CIC IDS 2017, as well as to various benchmark functions. The ease of hybridization of the GWO algorithm, its simplicity, its ability to perform a global optimal search, and the success of the MFO algorithm in obtaining the best solution suggested that an effective solution could be obtained by combining these two algorithms. For these reasons, the developed hybrid algorithm aims to achieve better results by exploiting the strengths of both the GWO and MFO algorithms. The results show a high level of success on the benchmark functions: the hybrid achieved better results on 12 of the 13 benchmark functions compared. In addition, the success rates obtained according to the evaluation criteria on the different data sets are also remarkable.
Comparing the 97.4%, 98.3%, and 99.2% classification accuracy results obtained on the NSL-KDD, UNSW-NB15, and CIC IDS 2017 data sets with the studies in the literature, the proposed approach appears quite successful.

Item: Design of a DFS to Manage Big Data in Distance Education Environments (Graz Univ Technology, Inst Information Systems Computer Media-IICM, 2022). Authors: Unver, Mahmut; Erguzen, Atilla; Erdal, Erdal
Information technologies have invaded every aspect of our lives. Distance education was also affected by this shift and became an accepted model of education. The move of education onto digital platforms has also brought unexpected problems, such as increased internet usage and the need for new software and internet-connected devices. Perhaps the most important of these problems is the management of the large amounts of data generated when all training activities are conducted remotely. Over the past decade, studies have provided important information about the quality of training and the benefits of distance learning. However, Big Data in distance education has been studied only to a limited extent, and to date no clear single solution has been found. In this study, a Distributed File System (DFS) is proposed and implemented to manage big data in distance education. The implemented ecosystem mainly comprises Dynamic Link Library (DLL) components, Windows service routines, and distributed data nodes. The DLL code is required to connect the Learning Management System (LMS) to the developed system. 67.72% of the files in the distance education system are small (<=16 MB), and 53.10% of the files are smaller than 1 MB; therefore, a dedicated Big Data management platform was needed to manage and archive small files. The proposed system was designed with a dynamic block structure to address this shortcoming. A serverless architecture was chosen and implemented to make the platform more robust.
Moreover, the developed platform also has compression and encryption features. According to system statistics, each written file was read 8.47 times on average, and for video archive files this value was 20.95. Accordingly, a framework was developed in the Write Once Read Many architecture. A comprehensive performance analysis was conducted against the operating system, NoSQL, RDBMS, and Hadoop. For file sizes of 1 MB and 50 MB, the developed system achieves response times of 0.95 ms and 22.35 ms, respectively, while Hadoop, a popular DFS, takes 4.01 ms and 47.88 ms.

Item: Developing a Health-Specific File System Using a Distributed File System (Kırıkkale Üniversitesi, 2018). Authors: Ergüzen, Atilla; Erdal, Erdal; Ünver, Mahmut
With the growth of Internet usage, the use of digital devices, especially computers, telephones, and tablets, has increased. In addition, the concept of the Internet of Things has started to enter our lives with developing technology. The health sector is one of the sectors most affected by technological developments: 10% of all the world's data comes from digital equipment used in the health sector, and the data generated in this area is increasing day by day. With the introduction of the Internet of Things into the field of health, the data size in this area has started to increase very rapidly. The aim of this study is to develop a health-specific distributed file system. The proposed method offers a scalable, robust, and serverless distributed architecture for storing data in the healthcare field. Security is prioritized, and static IP and encryption methods have been used. The proposed system provides high performance. In addition, it has a more robust architecture than the name-node architecture used in the literature.
According to experimental results, the developed system achieves results 93% better than NoSQL systems, 78% better than relational database management systems, and 71% better than operating systems.

Item: An Efficient Encoding Algorithm Using Local Path on Huffman Encoding Algorithm for Compression (MDPI, 2019). Authors: Erdal, Erdal; Erguzen, Atilla
Huffman encoding and arithmetic coding algorithms have shown great potential in the field of image compression; these algorithms are the origin of current image compression techniques. Nevertheless, both algorithms, which use the frequencies of the characters in the data, have deficiencies. They aim to represent the symbols used in the data with the shortest possible bit sequences, but they represent rarely used symbols with very long bit sequences. The arithmetic coding algorithm was developed to address the shortcomings of the Huffman encoding algorithm. This paper proposes an efficient alternative encoding algorithm built on the Huffman encoding algorithm. The main objective of the proposed algorithm is to reduce the number of bits that the Huffman encoding algorithm symbolizes with long codewords. Initially, the Huffman encoding algorithm is applied to the data. Characters represented by short bit sequences from the Huffman encoding algorithm are ignored. Flag bits are then added according to whether successive symbols are on the same leaf: if the next character is not on the same leaf, flag bit "0" is added; otherwise, flag bit "1" is added between the characters. In other words, the key significance of this algorithm is that it uses the effective aspects of the Huffman encoding algorithm while also proposing a solution for long bit sequences that cannot be represented efficiently. Most importantly, the validity of the algorithm is meticulously evaluated with three different groups of images.
Randomly selected images from the USC-SIPI and STARE databases, as well as randomly selected standard images from the internet, are used. The algorithm successfully performs compression operations on images. Some images with a balanced tree structure yielded results close to those of other algorithms; however, when the overall results are inspected, the proposed encoding algorithm achieves excellent results.

Item: An Efficient Middle Layer Platform for Medical Imaging Archives (Hindawi Ltd, 2018). Authors: Erguzen, Atilla; Erdal, Erdal
Digital medical image usage is common in health services and clinics. These data are of vital importance for diagnosis and treatment; therefore, their preservation, protection, and archiving are a challenge. Rapidly growing file sizes, differentiated data formats, and an increasing number of files constitute big data, which traditional systems lack the capability to process and store. This study investigates an efficient middle layer platform based on a Hadoop and MongoDB architecture using state-of-the-art technologies from the literature. We developed this system to improve the medical image compression method we developed previously, creating a middle layer platform that performs data compression and archiving operations. With this study, a scalable platform using the MapReduce programming model on Hadoop has been developed. MongoDB, a NoSQL database, has been used to satisfy the performance requirements of the platform. A four-node Hadoop cluster was built to evaluate the developed platform and execute distributed MapReduce algorithms. Actual patient medical images were used to validate the performance of the platform. Processing the test images takes 15,599 seconds on a single node, but on the developed platform it takes 8,153 seconds.
Moreover, due to the medical image processing package used in the proposed method, the compression ratio values produced for the non-ROI image are between 92.12% and 97.84%. In conclusion, the proposed platform provides a cloud-based integrated solution to the medical image archiving problem.

Item: Estimation of the FRP-concrete bond strength with code formulations and machine learning algorithms (Elsevier Sci Ltd, 2021). Authors: Basaran, Bogachan; Kalkan, Ilker; Bergil, Erhan; Erdal, Erdal
The present study pertains to the bond strength and development length of FRP bars embedded in concrete. Experimental results in the literature were compared to analytical estimates from the equations of different international codes and from machine learning techniques, i.e., Gaussian Process Regression (GPR), Artificial Neural Networks (ANN), Support Vector Machines Regression (SVMR), Regression Tree, and Multiple Linear Regression (MLR). The comparison was carried out for four different experimental methods, i.e., hinged beam, beam-end, spliced beam, and pullout, to identify the analytical equation or method in closest agreement with the test results for each method. The GPR method was found to provide the highest accuracy, with a mean value of 0.95 and a standard deviation of 0.14 for the predicted-to-experimental bond strength ratio. Based on the coefficient of determination, Root Mean Square Error, and Mean Absolute Percentage Error statistical criteria, the GPR method was followed by ANN, MLR, and SVMR in terms of agreement with the experimental results. Among the code equations, the bond strength equation of the ACI 440.1R-15 code showed the highest agreement with the experimental results, but its predictions remained over-conservative.
The other code formulations were found to yield estimates that were nearly constant for varying test parameters and highly conservative.

Item: Fog Computing Based Signature Verification: A Scenario-Based Approach (Kırıkkale Üniversitesi, 2019). Authors: Erdal, Erdal
Today, the development of technology greatly facilitates our lives in all areas. However, transactions over the internet bring security threats with them. For this reason, controls and studies are carried out to prevent unauthorized access to personal data. One of the most important of these controls is the signature information received from users. However, since signature information can be imitated and replayed, visual inspection alone is insufficient. For this reason, the most accurate approach is to extract and record signature-specific characteristic information at signing time and compare it with subsequent signatures. Such transactions have been implemented and developed on cloud computing. Since all data is sent and shared over the Internet in the traditional cloud computing architecture, it has disadvantages in terms of bandwidth, energy consumption, and security. Therefore, the fog computing architecture was developed, largely eliminating the deficiencies of traditional cloud computing. In this study, a fog computing-based signature verification approach has been developed. A scenario was devised for banks, as institutions where security is handled intensively, and evaluated. As a result of the study, a signature verification framework was developed that is more secure than traditional cloud computing. The study will guide future work in this area.

Item: Huffman-based lossless image encoding scheme (SPIE-Soc Photo-Optical Instrumentation Engineers, 2021). Authors: Erdal, Erdal
The data produced in today's Internet and computer world are expanding their wings day by day. With the passage of time, storage and archiving of these data are becoming a significant problem.
To overcome this problem, attempts have been made to reduce data sizes using compression methods; therefore, compression algorithms have received great attention. In this study, two efficient encoding algorithms are presented and explained in a crystal-clear manner. In all the compression algorithms, frequency modulation is used: the characters with the highest frequency after each character are determined, and the Huffman encoding algorithm is applied to them. In this study, the compression ratio (CR) is 49.44%. Moreover, 30 randomly selected images from three different datasets, the USC-SIPI, UCID, and STARE databases, were used to evaluate the performance of the algorithms. Consequently, excellent results were obtained on all test images relative to well-known comparison algorithms such as the Huffman encoding algorithm, the arithmetic coding algorithm, and LPHEA. (C) 2021 SPIE and IS&T

Item: Improving Technological Infrastructure of Distance Education through Trustworthy Platform-Independent Virtual Software Application Pools (MDPI, 2021). Authors: Erguzen, Atilla; Erdal, Erdal; Unver, Mahmut; Ozcan, Ahmet
Distance education (DE), which has evolved under the wings of information technologies in the last decade, has become a fundamental part of our modern education system. DE has not only replaced the traditional education method in areas such as social sciences and lifelong learning but has also significantly strengthened traditional education in mathematics, science, and engineering fields that require practical and intensive study. However, it lacks some key elements found in traditional educational approaches, such as (i) modern computer laboratories with special software installed to suit the student's field of interest; (ii) adequate staff for the maintenance and proper functioning of laboratories; (iii) face-to-face technical support; and (iv) license fees.
For students to overcome these shortcomings, a virtual application pool is needed through which they can easily access all the necessary applications via remote access. This research aims to develop a platform-independent virtual laboratory environment for DE students. This article was developed specifically to guide DE institutions and to make a positive contribution to the literature. The Technology Acceptance Model (TAM) was used to explain student behavior. It was concluded that students using the platform achieved more successful grades (by 12.89%) on laboratory assessments and were more satisfied with the education process.

Item: Internet of Things (IoT) (Kırıkkale Üniversitesi, 2020). Authors: Erdal, Erdal; Ergüzen, Atilla
The concept of the internet has long been accepted as part of human life, but with the technological developments experienced, it has evolved into the concept of the IoT. IoT, which has gained popularity in recent years and is favored by prominent researchers, provides basic functions on objects connected to the internet, such as managing, controlling, and transferring data, and its application to different areas has increased day by day. Within the scope of this work, a detailed literature review in the field of IoT was conducted and the current state of the art determined; the basic goal and concept of IoT were evaluated; its development from the past to the present and the evolution of the Internet into the IoT were examined in detail; the communication and infrastructure technologies used in IoT were detailed and explained; the application areas and uses were investigated in detail; and open issues, challenges, and future research directions in IoT were identified.

Item: Kinect Uygulamaları için Veri Transfer Platformu Tasarımı (2019). Authors: Erdal, Erdal; Ergüzen, Atilla
In recent years, great advances have been made in software, hardware, and algorithms. These technological developments have also affected sensor technologies.
Initially released as a gaming device, the Kinect sensor was met with great interest by both researchers and developers, and it has been used in the literature in different fields for different purposes. All data received from the Kinect sensor is delivered to developers through the Software Development Kit (SDK) developed by Microsoft. Depending on scene complexity, the Kinect sensor normally produces between 240,000 and 270,000 data points per second. The aim of this study is to design a data transfer platform for Kinect applications. The developed platform runs on a client-server architecture. The platform, which includes different scenarios for online and offline communication, also offers a set of filtering and encryption algorithms. The Point Cloud Library (PCL), a large-scale open-source project for 2D/3D image and point cloud processing, was used in the platform. Optionally, it also includes the VoxelGrid (VG) filter, an outlier filter, a histogram-based conditional filter, octree-based compression, and PGP encryption. In addition, a data structure specific to Kinect applications was developed. WebRTC middleware was used for online communication. As a result of all these stages, unnecessary data points were removed and compressed, secured data packets conforming to the developed data structure were obtained. A compression ratio of 19.96% was achieved as a result of the filtering. Thanks to the optional design, application- or client-based filtering is provided. With the file compression approach applied after filtering, a file compression result of 10.38% was also obtained.
The presented platform will provide performance gains in Kinect applications used by researchers and developers.

Item: Lateral torsional buckling of doubly-symmetric steel cellular I-Beams (Techno-Press, 2023). Authors: Ertenli, Mehmet Fethi; Erdal, Erdal; Buyukkaragoz, Alper; Kalkan, Ilker; Aksoylu, Ceyhun; Ozkilic, Yasin Onuralp
The absence of an important portion of the web plate in steel beams with multiple circular perforations, cellular beams, causes the web plate to undergo distortions prior to and during lateral torsional buckling (LTB). The conventional LTB equations in the codes and the literature underestimate the buckling moments of cellular beams due to web distortions. The present study is an attempt to develop analytical methods for estimating the elastic buckling moments of cellular beams. The proposed methods rely on reductions in the torsional and warping rigidities of the beams due to web distortions, and on reductions in the weak-axis bending and torsional rigidities due to the presence of web openings. To test the accuracy of the analytical estimates from the proposed solutions, a total of 114 finite element analyses were conducted for six different standard IPEO sections and varying unbraced lengths within the elastic limits. These analyses clearly indicated that the LTB solutions in the AISC 360-16 and AS4100:2020 codes overestimate the buckling loads of cellular beams within elastic limits, particularly at shorter span lengths. The LDB solutions in the literature and the Eurocode 3 LTB solution, on the other hand, provided conservative buckling moment estimates along the entire range of elastic buckling.

Item: Machine Learning Approaches in Detecting Network Attacks (Institute of Electrical and Electronics Engineers Inc., 2021). Authors: Dalmaz, Hasan; Erdal, Erdal; Ünver, Halil Murat
Developing technology brings many risks in terms of data security. In this regard, detecting attacks is an important issue for network security.
Intrusion detection systems, developed in response to technological developments and increasing attack diversity, must become ever more successful at detecting attacks. Today, many studies are carried out on this subject. When the literature is examined, there are various studies with varying success rates in detecting network attacks using machine learning approaches. In this study, the NSL-KDD dataset is explained in detail, its advantages over the KDD Cup 99 dataset are specified, and the classifier used, the performance criteria, and the success results obtained are evaluated. In addition, the developed GWO-MFO hybrid algorithm is described and its results are shared. © 2021 IEEE

Item: Medical Image Archiving System Implementation with Lossless Region of Interest and Optical Character Recognition (Amer Scientific Publishers, 2017). Authors: Erguzen, Atilla; Erdal, Erdal
Digital medical images are widely used at all stages of healthcare. Transferring and storing digital medical images plays a vital role for medical experts and patients. Given the large file sizes and storage space requirements, image compression has become a necessity. Instead of compressing the entire image, compressing only the region of interest (ROI) is an option. Applying lossless methods to the whole image does not provide a sufficient advantage; however, when lossy techniques are used, vital information in the medical image may be lost. In this study, a novel medical image archiving system implementation based on ROI and Optical Character Recognition (OCR) is proposed. In addition, a new dynamic file structure, specially designed to produce a better compression ratio and performance, was used. The medical image is separated into ROI and non-ROI parts. JPEG-LS, a lossless compression algorithm, is applied to the ROI segment of the medical image, while OCR and the Huffman coding algorithm are used for the non-ROI part of the image.
The proposed method was evaluated using actual patient medical images, and the compression ratio produced for the non-ROI image is between 92.12% and 97.84%. The average difference between the proposed method and the state of the art in the literature is 83.80% for the non-ROI part. In conclusion, the proposed method provides an integrated solution to the medical image archiving problem.

Item: Medikal görüntülerde ilgi duyulan bölge analizi ve yeni paralel sıkıştırma yöntemi geliştirilmesi (Kırıkkale Üniversitesi, 2017). Authors: Erdal, Erdal; Ergüzen, Atilla
Digital medical images are widely used at every stage of healthcare, and transferring and storing these images plays a vital role for medical experts and patients. Due to ever-growing file sizes and storage space requirements, image compression has become a necessity. Instead of compressing the entire image, compressing the region of interest (ROI) is an alternative option in this field. Applying lossless compression methods to the whole image does not provide a sufficient advantage, but when lossy techniques are used, vitally important information in the medical image may be lost. In this thesis, a new medical image archiving system based on ROI and Optical Character Recognition (OCR) is proposed. In addition, a new dynamic file structure, specially designed to produce a better compression ratio and performance, was used. The medical image was separated into ROI and non-ROI parts. JPEG-LS, a lossless compression algorithm, was applied to the ROI part of the medical image, while OCR and Huffman coding algorithms were used for the non-ROI part of the image. The developed method was evaluated using brain MR images of actual patients, and the compression ratio obtained for the non-ROI part of the image was found to be between 92.12% and 97.84%.
With the proposed method, the average difference from the state of the art in the literature was found to be 83.80% for the non-ROI part. As a result, the proposed method provides an integrated solution to the medical image archiving problem. In the second stage of the study, a platform using the Hadoop-based MapReduce programming model was developed. MongoDB, a NoSQL database, was used to meet the performance requirements of the platform. A four-node Hadoop cluster was set up on the platform. The cloud-based algorithms have more efficient data processing capability than a single node. The same test images were used to validate the performance of the platform. Processing the test images takes 15.599 seconds on a single node, but on the cloud-based platform this value was found to be 8.153 seconds. As a result, the method proposed in this part of the study provides a cloud-based solution to the medical image archiving problem.

Item: Storage Requirement Estimation For Electronic Document Management System With Artificial Neural Networks (Kırıkkale Üniversitesi, 2020). Authors: Çetinkaya, Zeynep; Erdal, Erdal; Ergüzen, Atilla
Electronic document management systems are responsible for the protection and management of the contents, formats, and relational features of all kinds of documents created by an institution in the course of its activities. Storage areas are one of the important elements of electronic document management systems. With every transaction and activity transferred to the electronic environment in institutions, the infrastructure and investment that must be allocated to storage areas for electronic document management systems increase, and forecasting this increase becomes more important over time. The artificial neural network (ANN) approach has been used in many areas in recent years; estimation studies in different fields have been carried out with ANNs, and successful results have been observed.
In this study, an ANN model is proposed for estimating the storage area required for electronic document management systems. Using Kırıkkale University electronic document management system data, different ANN models were created, the most suitable models were determined, and the required storage area was estimated for future periods.

Item: Yardımcı Sistem Olarak BCI ve EEG Sinyallerinin BCI Sistemlerde Kullanım Şekilleri (Kırıkkale Üniversitesi, 2018). Authors: Ergüzen, Atilla; Haltaş, Kadir; Erdal, Erdal; Lüy, Murat
The study of human anatomy and, accordingly, of diseases is still ongoing today.
One of the anatomical parts that has attracted the most human interest is undoubtedly the brain. Today's studies show progress in various areas by monitoring the brain's neural activity. One of the most common methods used to monitor brain signals is known as EEG (electroencephalography). Today, EEG is used in the medical field as an aid to diagnosis and treatment, and it is also used interdisciplinarily in computer science in Brain Computer Interface (BCI) systems. BCI systems are fundamentally based on collecting an individual's brain signals and using them appropriately so that the individual can communicate with the outside world. The areas of use of BCI systems include partial loss of motor function, severely paralyzed individuals, severe speech difficulties, and the like. This study presents a review of the current state of BCI system design. In this way, the status of BCI system studies can be tracked and the direction of developments in the BCI field can be seen.