Yazar "Zerman, Emin" seçeneğine göre listele
Now showing 1 - 6 of 6
Item: A Comparative Study on No-Reference Video Quality Assessment Metrics (IEEE, 2014)
Zerman, Emin; Akar, Gozde Bozdagi; Konuk, Baris; Yilmaz, Gokce Nur
In the last two decades, Internet technology has advanced and connection speeds have increased from the kilobit to the hundred-megabit scale. With the growing coverage of the Internet and the use of mobile devices such as tablets and smartphones, the use of social media and especially of multimedia content has grown rapidly. This growth in streaming multimedia has created a need to assess the user experience of multimedia, and especially of video. Even though there are different Video Quality Assessment (VQA) methods for this purpose, most of them are Full-Reference (FR) or Reduced-Reference (RR). In today's world of many mobile devices, applying these methods is often not possible since they need the reference data. No-Reference (NR) video metrics are much more suitable for this case. In this paper, the main objective is to evaluate a previously proposed NR VQA metric on a new dataset and to compare the results with other high-performance NR metrics, such as G.1070 and G.1070E, which do not utilize the spatial and temporal characteristics of a given video sequence. Evaluation and comparison results show the accuracy and robustness of the proposed metric.

Item: Content Aware Audiovisual Quality Assessment (IEEE, 2015)
Konuk, Baris; Zerman, Emin; Akar, Gozde Bozdagi; Nur, Gokce
In this study, a novel content-aware audiovisual quality assessment (AVQA) method, which uses a video classification method based on spatio-temporal characteristics, is proposed and evaluated on the AVQA database created by the University of Plymouth. The proposed AVQA method is evaluated using subjective audio mean opinion scores (MOS) and subjective video MOS. Results indicate that both the classification method and the proposed content-dependent AVQA method are quite satisfactory.

Item: A parametric video quality model based on source and network characteristics (IEEE, 2014)
Zerman, Emin; Konuk, Baris; Nur, Gokce; Akar, Gozde Bozdagi
The increasing demand for streaming video raises the need for flexible and easily implemented Video Quality Assessment (VQA) metrics. Although there are different VQA metrics, most of them are either Full-Reference (FR) or Reduced-Reference (RR). Both FR and RR metrics bring challenges for on-the-fly multimedia systems due to the additional network traffic required for reference data. No-Reference (NR) video metrics, on the other hand, as the name suggests, are much more flexible for user-end applications. This introduces a need for robust and efficient NR VQA metrics. In this paper, an NR VQA metric that considers the spatiotemporal information, bit rate, and packet loss rate characteristics of video content is proposed. The proposed metric is evaluated on the EPFL-PoliMI dataset, which includes diverse video content characteristics. The experimental results show that the proposed metric is a robust and accurate NR VQA metric across diverse video content characteristics.

Item: A spatiotemporal no-reference video quality assessment model (IEEE, 2013)
Konuk, Baris; Zerman, Emin; Nur, Gokce; Akar, Gozde Bozdagi
Many researchers have been developing objective video quality assessment methods due to the increasing demand for measuring the video quality perceived by end users, in order to speed up advancements in multimedia services. However, most of these methods are either Full-Reference (FR) metrics, which require the original video, or Reduced-Reference (RR) metrics, which need some features extracted from the original video. No-Reference (NR) metrics, on the other hand, do not require any information about the original video; hence, they are much more suitable for applications such as video streaming. This paper presents a novel, objective, NR video quality assessment algorithm. The proposed algorithm utilizes the spatial extent of the video, the temporal extent of the video computed from motion vectors, the bit rate, and the packet loss ratio. Test results obtained using the LIVE video quality database demonstrate the accuracy and robustness of the proposed metric.

Item: Spatiotemporal No-Reference Video Quality Assessment Model on Distortions Based on Encoding (IEEE, 2013)
Zerman, Emin; Akar, Gozde Bozdagi; Konuk, Baris; Nur, Gokce
With the increasing demand for video applications, video quality estimation has become an important issue in today's technological world. Different researchers and institutions are working on video quality estimation. Most objective Video Quality Assessment (VQA) algorithms are Full-Reference (FR) metrics, which require the original video. Metrics that require some features extracted from the reference video are called Reduced-Reference (RR). No-Reference (NR) metrics, in contrast, do not require any information about the original video; therefore, NR metrics are much more suitable for online applications such as video streaming. A novel, objective, NR video quality assessment metric is proposed in this study. The proposed algorithm utilizes the spatial extent of the video, the temporal extent of the video computed from motion vectors, and the bit rate. Test results are obtained using the bit streams with encoding-based distortions from the LIVE video quality database. Results indicate that the proposed metric is an accurate and robust algorithm.

Item: Video Content Analysis Method for Audiovisual Quality Assessment (IEEE, 2016)
Konuk, Baris; Zerman, Emin; Nur, Gokce; Akar, Gozde Bozdagi
In this study, a novel video content analysis method based on spatio-temporal characteristics is presented. The proposed method has been evaluated on different video quality assessment databases, which include videos with different characteristics and distortion types. Test results obtained on these databases demonstrate the robustness and accuracy of the proposed content analysis method. Moreover, this analysis method is employed to examine the performance improvement in audiovisual quality assessment when the video content is taken into consideration.
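Several of the abstracts above rely on spatial and temporal characteristics of the video content. The sketch below shows one common way such per-sequence descriptors are computed, in the spirit of the ITU-T P.910 SI/TI measures; it is a minimal illustration under that assumption, not the authors' implementation, and the helper function names and the OpenCV dependency are assumptions of this sketch.

```python
# Illustrative sketch (not the authors' code): spatial and temporal information
# descriptors in the spirit of ITU-T P.910 SI/TI, i.e. the kind of spatiotemporal
# characteristics referred to in the abstracts above.
import numpy as np
import cv2  # assumed dependency for Sobel filtering


def spatial_information(frame_gray: np.ndarray) -> float:
    """Per-frame SI: std. deviation of the Sobel gradient magnitude of the luminance plane."""
    gx = cv2.Sobel(frame_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(frame_gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.std(np.hypot(gx, gy)))


def temporal_information(prev_gray: np.ndarray, curr_gray: np.ndarray) -> float:
    """Per-frame TI: std. deviation of the difference between consecutive luminance frames."""
    diff = curr_gray.astype(np.float64) - prev_gray.astype(np.float64)
    return float(np.std(diff))


def sequence_si_ti(frames):
    """Sequence-level descriptors: maxima of the per-frame SI and TI values."""
    si_values = [spatial_information(f) for f in frames]
    ti_values = [temporal_information(p, c) for p, c in zip(frames, frames[1:])]
    si = max(si_values)
    ti = max(ti_values) if ti_values else 0.0
    return si, ti
```

Such descriptors can feed the content classification step mentioned in the content-aware AVQA abstracts, since sequences with similar SI/TI tend to respond similarly to compression and transmission errors.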
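The parametric NR models described above combine bit rate, packet loss rate, and spatiotemporal descriptors into a quality score and are validated against subjective MOS. The sketch below is a generic, hypothetical illustration of that pipeline: the functional form, coefficients, and function names are placeholders rather than the published models; only the Pearson/Spearman evaluation step reflects standard VQA practice.

```python
# Illustrative sketch only: a generic parametric NR quality predictor driven by
# bit rate, packet loss rate, and spatiotemporal descriptors, plus the standard
# correlation-based evaluation against subjective MOS. The functional form and
# coefficients are hypothetical placeholders, not the metrics proposed in the papers.
import numpy as np
from scipy.stats import pearsonr, spearmanr


def predict_quality(bitrate_kbps: float, packet_loss_rate: float,
                    si: float, ti: float,
                    coeffs=(1.0, 2.0, 0.03, 8.0)) -> float:
    """Map stream and content parameters to a quality score on a 1..5 scale (placeholder form)."""
    a, b, c, d = coeffs
    # Coding quality grows with bit rate, tempered by spatiotemporal complexity (SI + TI).
    coding_quality = a + b * np.log10(bitrate_kbps) / (1.0 + c * (si + ti))
    # Packet losses scale the achievable quality down exponentially.
    loss_penalty = np.exp(-d * packet_loss_rate)
    return float(np.clip(1.0 + (coding_quality - 1.0) * loss_penalty, 1.0, 5.0))


def evaluate_against_mos(predicted, subjective_mos):
    """Report Pearson (linearity) and Spearman (monotonicity) correlations with subjective MOS."""
    plcc, _ = pearsonr(predicted, subjective_mos)
    srocc, _ = spearmanr(predicted, subjective_mos)
    return {"PLCC": plcc, "SROCC": srocc}
```

In practice, the predicted scores for all sequences in a database such as EPFL-PoliMI or LIVE would be passed to evaluate_against_mos together with the corresponding subjective MOS values, and the reported PLCC/SROCC indicate how accurately and monotonically the metric tracks perceived quality.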