Browse by author: Nur, Gokce
Showing 1 - 8 of 8
Item: An abstraction based reduced reference depth perception metric for 3D video (IEEE, 2012)
Authors: Nur, Gokce; Akar, Gozde Bozdagi
Abstract: In order to speed up the widespread proliferation of 3D video technologies (e.g., coding, transmission, display, etc.), the effect of these technologies on 3D perception should be investigated efficiently and reliably. Using Full-Reference (FR) objective metrics for this investigation is not practical, especially for "on the fly" 3D perception evaluation. Thus, a Reduced-Reference (RR) metric is proposed in this paper to predict the depth perception of 3D video. The color-plus-depth 3D video representation is exploited for the proposed metric. Since the significant depth levels of the depth map sequences have a great influence on the depth perception of users, they are considered as side information in the proposed RR metric. To determine the significant depth levels, the depth map sequences are abstracted using a bilateral filter. The Video Quality Metric (VQM) is utilized to predict the depth perception ensured by the significant depth levels due to its good correlation with the Human Visual System (HVS). The performance assessment results show that the proposed RR metric can be used in place of an FR metric to reliably measure the depth perception of 3D video with low overhead.

Item: Advanced Adaptation Techniques for Improved Video Perception (IEEE-Inst Electrical Electronics Engineers Inc, 2012)
Authors: Nur, Gokce; Arachchi, Hemantha Kodikara; Dogan, Safak; Kondoz, Ahmet M.
Abstract: Three different advanced adaptation techniques for improving the video perception of users are proposed in this paper. The proposed techniques exploit different adaptation decision-taking and adaptation approaches to adapt particular core parameters while considering diverse contextual information and constraints to achieve improved video perception.
The first proposed technique employs a utility-based adaptation approach to perform adaptation operations on the spatial resolution, frame rate, and quality scalability parameters according to content-related contextual information (i.e., motion activity and structural features) while fulfilling network bandwidth and terminal display size constraints. Using this technique, video contents can be adapted with the scalability parameters best fitting the needs of users and contextual constraints, to achieve improved video perception. The second technique relies on prioritizing the network abstraction layer units related to key frame, non-key frame, and temporal layer parameters to adapt video contents to satisfy the network bandwidth constraint. Utilizing this technique in adaptation operations improves the rate-distortion performance of adapted video contents, both in terms of the bit rate of the adapted contents and the video perception of users. The third technique is based on adapting the bit rate of 3-D video contents according to changes in the ambient illumination of the viewing environment. The adaptation results, evaluated by either subjective or objective quality assessment techniques, show that all of the proposed techniques are effective at improving the video perception of users.

Item: Content Aware Audiovisual Quality Assessment (IEEE, 2015)
Authors: Konuk, Baris; Zerman, Emin; Akar, Gozde Bozdagi; Nur, Gokce
Abstract: In this study, a novel content-aware audiovisual quality assessment (AVQA) method using a video classification method based on spatio-temporal characteristics has been proposed and evaluated on the AVQA database created by the University of Plymouth. The proposed AVQA method is evaluated using subjective audio mean opinion scores (MOS) and subjective video MOS.
Results indicate that both the classification method and the proposed content-dependent AVQA method are quite satisfactory.

Item: A parametric video quality model based on source and network characteristics (IEEE, 2014)
Authors: Zerman, Emin; Konuk, Baris; Nur, Gokce; Akar, Gozde Bozdagi
Abstract: The increasing demand for streaming video raises the need for flexible and easily implemented Video Quality Assessment (VQA) metrics. Although there are different VQA metrics, most of these are either Full-Reference (FR) or Reduced-Reference (RR). Both FR and RR metrics bring challenges for on-the-fly multimedia systems due to the additional network traffic needed to carry reference data. No-Reference (NR) video metrics, on the other hand, as the name suggests, are much more flexible for user-end applications. This introduces a need for robust and efficient NR VQA metrics. In this paper, an NR VQA metric considering the spatiotemporal information, bit rate, and packet loss rate characteristics of a video content is proposed. The proposed metric is evaluated on the EPFL-PoliMI dataset, which includes different video content characteristics. The experimental results show that the proposed metric is a robust and accurate NR VQA metric across diverse video content characteristics.

Item: Prediction of 3D Video Experience from Video Quality and Depth Perception Considering Ambient Illumination Context (IEEE, 2012)
Authors: Nur, Firat Can; Nur, Gokce
Abstract: 3-Dimensional (3D) video provides a realistic viewing experience to users due to the addition of depth sensation to 2-Dimensional (2D) video. Nevertheless, the advancement of 3D video technologies (e.g., transmission, coding, etc.) is in its early stage. To support the proliferation of 3D video technologies for "Future Internet" users, prediction of the 3D video experience, which includes both video quality and depth perception, can be used as feedback information to modify the parameters of these technologies.
Although a few research studies have been carried out to measure video quality and depth perception individually, the same effort has not been devoted to the 3D video experience. The reason is that the 3D experience is influenced by several contextual factors related to both video quality and depth perception (e.g., ambient illumination conditions, content characteristics, etc.). Therefore, in this paper, subjective experiments are conducted to monitor the effect of ambient illumination on video quality, depth perception, and the 3D video experience. Using the results of these experiments, a generic mathematical function is devised to predict the 3D viewing experience of a user at a specific ambient illumination condition from the video quality and depth perception. The knowledge gained through this research study can be exploited in coordination with 3D video technologies to improve the 3D viewing experience of Future Internet users.

Item: A spatiotemporal no-reference video quality assessment model (IEEE, 2013)
Authors: Konuk, Baris; Zerman, Emin; Nur, Gokce; Akar, Gozde Bozdagi
Abstract: Many researchers have been developing objective video quality assessment methods due to the increasing demand by end users for perceived video quality measurement results to speed up advancements in multimedia services. However, most of these methods are either Full-Reference (FR) metrics, which require the original video, or Reduced-Reference (RR) metrics, which need some features extracted from the original video. No-Reference (NR) metrics, on the other hand, do not require any information about the original video and are hence much more suitable for applications like video streaming. This paper presents a novel, objective NR video quality assessment algorithm. The proposed algorithm is based on the utilization of the spatial extent of the video, the temporal extent of the video using motion vectors, the bit rate, and the packet loss ratio.
Test results obtained using the LIVE video quality database demonstrate the accuracy and robustness of the proposed metric.

Item: Spatiotemporal No-Reference Video Quality Assessment Model on Distortions Based on Encoding (IEEE, 2013)
Authors: Zerman, Emin; Akar, Gozde Bozdagi; Konuk, Baris; Nur, Gokce
Abstract: With the increasing demand for video applications, video quality estimation has become an important issue in today's technological world. Various researchers and institutions are working on video quality estimation. Most objective Video Quality Assessment (VQA) algorithms are Full-Reference (FR) metrics, which require the original video. Metrics that require some features extracted from the reference video are called Reduced-Reference (RR). No-Reference (NR) metrics, in contrast, do not require any information about the original video; therefore, NR metrics are much more suitable for online applications such as video streaming. A novel, objective NR video quality assessment metric is proposed in this study. The proposed algorithm is based on the utilization of the spatial extent of the video, the temporal extent of the video using motion vectors, and the bit rate. Test results are obtained using bit streams with encoding-based distortions from the LIVE video quality database. The results indicate that the proposed metric is an accurate and robust algorithm.

Item: Video Content Analysis Method for Audiovisual Quality Assessment (IEEE, 2016)
Authors: Konuk, Baris; Zerman, Emin; Nur, Gokce; Akar, Gozde Bozdagi
Abstract: In this study, a novel video content analysis method based on spatio-temporal characteristics is presented. The proposed method has been evaluated on different video quality assessment databases, which include videos with different characteristics and distortion types. Test results obtained on different databases demonstrate the robustness and accuracy of the proposed content analysis method.
Moreover, this analysis method is employed to examine the performance improvement in audiovisual quality assessment when the video content is taken into consideration.
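Several of the entries above build on spatio-temporal content features (the "spatial extent" and "temporal extent" of the video). As a rough illustrative sketch, and not the authors' actual implementations, the widely used spatial information (SI) and temporal information (TI) features defined in ITU-T Rec. P.910 can be computed from luma frames as follows:

```python
import numpy as np

def sobel_magnitude(frame):
    """Gradient magnitude via 3x3 Sobel filters (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = frame.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            # Accumulate the weighted, shifted copies of the frame
            # (equivalent to a 3x3 cross-correlation on the interior).
            patch = frame[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.sqrt(gx ** 2 + gy ** 2)

def si_ti(frames):
    """P.910-style spatial (SI) and temporal (TI) information.

    SI: max over frames of the spatial std of the Sobel gradient magnitude.
    TI: max over frame pairs of the std of the pixel-wise frame difference.
    """
    si = max(float(sobel_magnitude(f).std()) for f in frames)
    ti = max(float((b - a).std()) for a, b in zip(frames, frames[1:]))
    return si, ti

# Demo on synthetic luma frames; real use would feed decoded Y-plane data.
rng = np.random.default_rng(0)
frames = [rng.uniform(0, 255, size=(48, 64)) for _ in range(5)]
si, ti = si_ti(frames)
print(f"SI={si:.1f} TI={ti:.1f}")
```

Features of this kind are what a content classifier or an NR metric would combine with stream-level parameters such as bit rate and packet loss rate; the exact feature definitions and models used in the papers above differ.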