Human Language Technology

Resources

Code

  1. Unified Framework for Speaker and Utterance Verification: [https://github.com/sn1ff1918/SUV]
  2. Multi-level Adaptive Speech Activity Detector: [https://github.com/bidishasharma/MultiSAD/]
  3. PESnQ: Perceptual Evaluation of Singing Quality: [https://github.com/chitralekha18/PESnQ_APSIPA2017] [Paper]
  4. Automatic Sung-Lyrics Data Annotation: [https://github.com/chitralekha18/AutomaticSungLyricsAnnotation_ISMIR2018.git] [Paper]
  5. NUS AutoLyrixAlign: [https://github.com/chitralekha18/AutoLyrixAlign.git]

Data Set

Demo

  1. Robust Sound Recognition: A Neuromorphic Approach: [https://youtu.be/MIVvNb0sWOM]
  2. Speak-to-Sing: [https://speak-to-sing.hltnus.org/] [Poster]
  3. MuSigPro: Automatic Leaderboard Generation of Singers using Reference-Independent Singing Quality Evaluation Methods: [https://youtu.be/IAlsECqd9IE]
  4. AutoLyrixAlign: Automatic lyrics-to-audio alignment system for polyphonic music audio