
Asst. Lect. Shaimaa Toriah Mohamed :: Publications:

Title:
Shots Temporal Prediction Rules for High-Dimensional Data of Semantic Video Retrieval
Authors: Shaimaa Toriah Mohamed Toriah, Atef Zaki Ghalwash and Aliaa A.A. Youssif
Year: 2018
Keywords: Semantic Video Retrieval, Temporal Association Rules, Principal Component Analysis, Gaussian Mixture Model Clustering, Expectation Maximization Algorithm, Sequential Pattern Discovery Algorithm
Journal: American Journal of Applied Sciences
Volume: 15
Issue: 1
Pages: 10
Publisher: Not Available
Local/International: International
Paper Link:
Full paper shimaa tarih mohamed_document(2).pdf
Supplementary materials Not Available
Abstract:

Temporal consistency is a vital property in semantic video retrieval, yet few studies exploit it. Most methods in those studies depend on expert-defined rules and on ground-truth annotation. Ground-truth annotation is time-consuming, labor-intensive and domain-specific; it also covers only a limited number of annotated concepts and annotated shots. Video concepts have interrelated relations, so the temporal rules extracted from ground-truth annotation are often inaccurate and incomplete. Concept detection score data, by contrast, form a huge, high-dimensional, continuous-valued dataset that is generated automatically. Temporal association rule algorithms are efficient methods for revealing temporal relations, but they have limitations when applied to high-dimensional, continuous-valued data, and these constraints have led to a lack of research using temporal association rules. We therefore propose a novel framework that encodes the high-dimensional, continuous-valued concept detection scores into a single stream of numbers without loss of important information and that predicts the behavior of neighbouring shots by generating temporal association rules. Experiments on the TRECVID 2010 dataset show that the proposed framework is both efficient and effective: it reduces the dataset matrix from 130×150000 dimensions to 130×1 dimensions without loss of important information, and it predicts the behavior of neighbouring shots, the number of which can be 10 or more, using the extracted temporal rules.
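The encoding step the abstract describes (collapsing a high-dimensional concept-score matrix into a single symbolic stream per shot, via principal component analysis followed by clustering) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the toy data shape, the quantile-based discretization standing in for the paper's GMM/EM clustering, and all variable names are assumptions.

```python
# Hedged sketch of the encoding idea from the abstract: each shot's
# concept-detection score vector is reduced to one number (first PCA
# component) and then to one symbol (a cluster label). The resulting
# symbol stream could then feed a sequential pattern / temporal
# association rule miner.
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for concept detection scores: 12 shots x 5 concepts
# (the paper's real matrix is far larger, 130 x 150000).
scores = rng.random((12, 5))

# PCA via SVD: project every shot onto the first principal component,
# turning the matrix into a single stream of numbers.
centered = scores - scores.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
stream = centered @ vt[0]          # one number per shot

# Simple 1-D discretization as a stand-in for the paper's Gaussian
# Mixture Model clustering fitted with Expectation Maximization:
# quantile bins map each projected value to one of k symbols.
k = 3
edges = np.quantile(stream, np.linspace(0, 1, k + 1)[1:-1])
labels = np.digitize(stream, edges)  # symbol per shot, values 0..k-1

print(labels.shape)  # (12,) -- one symbol per shot
```

The key point of the sketch is the shape change: a shots-by-concepts matrix becomes one label per shot, which is the form sequential pattern discovery algorithms expect.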
