A sign language recognition system (SLRS) is one of the application areas of human-computer interaction (HCI) in which the signs of hearing-impaired people are converted into the text or voice of an oral language. This paper presents an automatic visual SLRS that translates isolated Arabic word signs into text. The proposed system has four main stages: hand segmentation, tracking, feature extraction, and classification. A dynamic skin detector based on the face color tone is used for hand segmentation. A proposed skin-blob tracking technique then identifies and tracks the hands. A dataset of 30 isolated words used in the daily school life of hearing-impaired children was developed to evaluate the proposed system, taking into consideration that 83% of the words involve different occlusion states. Experimental results indicate that the proposed system achieves a recognition rate of 97% in signer-independent mode. In addition, the proposed occlusion-resolving technique outperforms other methods by accurately specifying the positions of the hands and the head, with an improvement of 2.57% at τ = 5, which aids in differentiating between similar gestures.
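The abstract does not give the exact formulation of the dynamic skin detector, but the idea of calibrating skin thresholds from the signer's own face tone can be illustrated with a minimal sketch. The function name, the choice of YCrCb chroma statistics, and the threshold parameter `k` below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def skin_mask_from_face(frame_ycrcb, face_box, k=2.5):
    """Threshold a YCrCb frame using chroma statistics sampled from the face.

    frame_ycrcb : (H, W, 3) uint8 array in YCrCb color space
    face_box    : (top, left, bottom, right) of the detected face region
    k           : number of standard deviations kept around the face mean
                  (illustrative tuning parameter, not from the paper)
    """
    t, l, b, r = face_box
    face = frame_ycrcb[t:b, l:r].reshape(-1, 3).astype(np.float64)
    # Chroma channels (Cr, Cb) are less sensitive to lighting than luma (Y),
    # so the detector is calibrated on them only.
    mean = face[:, 1:].mean(axis=0)
    std = face[:, 1:].std(axis=0) + 1e-6  # avoid zero-width acceptance band
    chroma = frame_ycrcb[..., 1:].astype(np.float64)
    # A pixel counts as skin if both chroma channels fall within k standard
    # deviations of the statistics sampled from the face.
    return np.all(np.abs(chroma - mean) <= k * std, axis=-1)
```

Because the thresholds are derived per-frame from the detected face, the same routine adapts to different signers and lighting conditions, which is what makes the detector "dynamic"; the resulting binary mask would then feed the skin-blob tracking stage.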