We present BosphorusSign22k, a new benchmark dataset for vision-based, user-independent, isolated Sign Language Recognition (SLR). The dataset was collected to serve both the linguistics and computer science communities. It contains trimmed, isolated videos of Turkish Sign Language glosses from three domains: health, finance, and commonly used everyday signs. Each sign was performed by six native signers with more than four repetitions.
Recordings were captured with a Microsoft Kinect v2 at 1080p (1920x1080 pixels) resolution and 30 frames per second. All videos share the same recording setup: signers stood in front of a chroma-key background, 1.5 meters away from the camera. For each sign gloss video, we provide the RGB video, the depth map, and the skeleton of the signer. We additionally provide OpenPose skeleton joints, which include facial landmarks and hand joint positions along with the body pose.
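OpenPose writes keypoints in a well-defined JSON structure; as an illustration of how the provided joints could be read, here is a minimal Python sketch. The per-frame JSON layout and the file path are assumptions based on OpenPose's standard output, not a guarantee of this dataset's exact packaging.

```python
import json
import numpy as np

def load_openpose_frame(json_path):
    """Parse one OpenPose frame JSON into (N, 3) arrays of (x, y, confidence)."""
    with open(json_path) as f:
        data = json.load(f)
    person = data["people"][0]  # a single signer per video

    def as_array(key):
        return np.asarray(person[key], dtype=np.float32).reshape(-1, 3)

    return {
        "body": as_array("pose_keypoints_2d"),              # 25 body keypoints
        "face": as_array("face_keypoints_2d"),              # 70 facial landmarks
        "left_hand": as_array("hand_left_keypoints_2d"),    # 21 left-hand joints
        "right_hand": as_array("hand_right_keypoints_2d"),  # 21 right-hand joints
    }
```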
The BosphorusSign22k dataset has a vocabulary of 744 sign glosses: 428 in the health domain, 163 in the finance domain, and 174 commonly used everyday sign glosses. The specifications and modalities of the dataset are as follows:
| Feature | Value |
| --- | --- |
| Number of Sign Glosses | 744 |
| Number of Signers | 6 |
| Number of Videos | 22,542 |
| Total Duration | ~19 hours (~2M frames) |
| RGB Resolution | 1920 x 1080 pixels |
| Depth Resolution | 512 x 424 pixels |
| Frame Rate | 30 frames/second |
| Body Pose Information (Kinect v2) | 25 x 3D keypoints |
| Body Pose Information (OpenPose) | 25 x 2D keypoints |
| Facial Landmarks (OpenPose) | 70 x 2D keypoints |
| Hand Pose Information (OpenPose) | 2 x 21 x 2D keypoints |
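As a sanity check on the dimensions listed above, concatenating the 2D coordinates of all OpenPose keypoints for one frame yields a (25 + 70 + 21 + 21) x 2 = 274-dimensional vector. A minimal sketch, assuming the dictionary layout from the loader sketched earlier:

```python
import numpy as np

def frame_feature(keypoints):
    """Flatten per-frame 2D keypoints into one vector.

    `keypoints` maps part name -> (N, 3) array of (x, y, confidence),
    e.g. the dict returned by the loader sketched above.
    Output length: (25 + 70 + 21 + 21) * 2 = 274.
    """
    parts = ("body", "face", "left_hand", "right_hand")
    return np.concatenate([keypoints[p][:, :2].ravel() for p in parts])
```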
The dataset is publicly available for research purposes upon submission of a signed EULA to the authors. For further information, please contact the authors.
Published signer-independent isolated SLR results on the BosphorusSign22k dataset (top-1 and top-5 accuracy in %; higher is better):
| Author | Method | Top-1 Acc. (%) | Top-5 Acc. (%) |
| --- | --- | --- | --- |
| Kındıroğlu et al. [1] (General subset only) | Temporal Accumulative Features | 81.37 | 97.47 |
| Özdemir et al. [2] (baseline) | 3D ResNets (MC3) | 78.85 | 94.76 |
| Özdemir et al. [2] (baseline) | IDT (HOG + HOF + MBH) | 88.53 | - |
| Gökçe et al. [3] | Score-level Multi Cue Fusion (3D ResNets) | 94.94 | 99.76 |
[1] Kındıroğlu, Ahmet Alp, Oğulcan Özdemir, and Lale Akarun. "Temporal Accumulative Features for Sign Language Recognition." 2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2019.
[2] Özdemir, Oğulcan, Ahmet Alp Kındıroğlu, Necati Cihan Camgöz, and Lale Akarun. "BosphorusSign22k Sign Language Recognition Dataset." 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives, Language Resources and Evaluation Conference (LREC 2020), 2020.
[3] Gökçe, Çağrı, Oğulcan Özdemir, Ahmet Alp Kındıroğlu, and Lale Akarun. "Score-level Multi Cue Fusion for Sign Language Recognition." arXiv preprint arXiv:2009.14139 (2020).
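For reference, the top-1 and top-5 metrics reported above can be computed from per-video classification scores as in the minimal Python sketch below (the `scores` and `labels` arrays are placeholders for a model's outputs and the ground-truth gloss indices, not part of the dataset release):

```python
import numpy as np

def topk_accuracy(scores, labels, k=5):
    """scores: (num_videos, num_classes); labels: (num_videos,) ground-truth indices."""
    topk = np.argsort(scores, axis=1)[:, -k:]      # indices of the k highest scores
    hits = (topk == labels[:, None]).any(axis=1)   # true if label is among the top k
    return 100.0 * hits.mean()                     # accuracy in percent

# e.g. top1 = topk_accuracy(scores, labels, k=1)
#      top5 = topk_accuracy(scores, labels, k=5)
```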
Please cite the following papers if you use the BosphorusSign22k dataset in your research.
```bibtex
@inproceedings{ozdemir2020bosphorussign22k,
  title     = {{BosphorusSign22k Sign Language Recognition Dataset}},
  author    = {{\"O}zdemir, O{\u{g}}ulcan and K{\i}nd{\i}ro{\u{g}}lu, Ahmet Alp and Cihan Camgoz, Necati and Akarun, Lale},
  booktitle = {Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives},
  year      = {2020},
}

@inproceedings{camgoz2016bosphorussign,
  title     = {{BosphorusSign: A Turkish Sign Language Recognition Corpus in Health and Finance Domains}},
  author    = {Camg{\"o}z, Necati Cihan and K{\i}nd{\i}ro{\u{g}}lu, Ahmet Alp and Karab{\"u}kl{\"u}, Serpil and Kelepir, Meltem and {\"O}zsoy, Ay{\c{s}}e Sumru and Akarun, Lale},
  booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)},
  pages     = {1383--1388},
  year      = {2016},
}
```
Links to the papers: