Person re-identification is one of the crucial tasks in smart surveillance systems. It aims to determine whether a person has already been observed by another, non-overlapping camera in a wide camera network. It is a challenging task because of the large variations in the appearance of persons across different cameras. Most existing state-of-the-art person re-identification systems re-identify a person in the short-term setting, in which the person has not changed their appearance. However, these systems fail in the long-term setting because they rely only on appearance features, while the person is expected to change their appearance. In this paper, we propose a long-term person re-identification system based on deep learning that extracts discriminative human gait features to address the problem of appearance variation. In our proposed model, a combination of instance normalization and batch normalization is adopted in the ResNet layers, which makes the model invariant to appearance changes. The proposed model is evaluated on the CASIA-B dataset, a challenging dataset with many different appearances for each identity. A comprehensive evaluation shows that our model outperforms the existing state-of-the-art systems, especially in rank-1 and rank-5, improving rank-1 accuracy from 59.7% to 88.1% and rank-5 accuracy from 80.05% to 96.25%. Our model is also evaluated on the short-term person re-identification dataset Market1501, where it achieves 90.1% rank-1 accuracy.
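For illustration, the sketch below shows one way instance normalization and batch normalization could be combined inside a ResNet layer. It is written in PyTorch, which the abstract does not specify, and the IBN-style channel split with a 50/50 ratio is an assumption rather than a detail taken from the paper.

```python
import torch
import torch.nn as nn


class IBN(nn.Module):
    """Minimal sketch of an instance/batch normalization combination.

    Assumption: half of the feature channels are normalized with
    per-instance statistics (intended to suppress appearance-specific
    style) and the other half with batch statistics (intended to keep
    discriminative content). The exact split ratio and placement inside
    the ResNet block are hypothetical choices for illustration.
    """

    def __init__(self, planes: int):
        super().__init__()
        half = planes // 2
        self.half = half
        self.instance_norm = nn.InstanceNorm2d(half, affine=True)
        self.batch_norm = nn.BatchNorm2d(planes - half)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split the feature map along the channel dimension.
        part_in, part_bn = torch.split(
            x, [self.half, x.size(1) - self.half], dim=1
        )
        # Normalize each half with its respective scheme, then recombine.
        out_in = self.instance_norm(part_in.contiguous())
        out_bn = self.batch_norm(part_bn.contiguous())
        return torch.cat((out_in, out_bn), dim=1)


if __name__ == "__main__":
    layer = IBN(64)
    dummy = torch.randn(4, 64, 32, 32)   # (batch, channels, height, width)
    print(layer(dummy).shape)            # torch.Size([4, 64, 32, 32])
```

In this kind of design, the instance-normalized channels are meant to reduce sensitivity to per-camera and per-outfit style changes, which is consistent with the appearance-invariance goal stated in the abstract.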