We propose an automatic method to quantify laryngeal movements in laryngoscopic videos, with the aim of facilitating diagnosis. The proposed method analyses laryngoscopic videos using a deep learning-based algorithm that delineates the glottic opening, vocal folds, and supraglottic structures. The segmentation results are quantified along the temporal dimension and processed with singular spectrum analysis (SSA) to extract information that clinicians can use in diagnosis. The segmentation was validated on 400 images drawn from 20 videos of different patients, acquired with different endoscopic systems. Five clinical cases are also presented to showcase the final quantitative analysis.
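To make the SSA step concrete, below is a minimal sketch of how a per-frame measurement (e.g. a glottal-area trace extracted from the segmentation masks) can be decomposed with basic SSA. This is an illustrative NumPy implementation under assumed parameters (the function name `ssa_decompose`, the window length, and the synthetic signal are all hypothetical, not taken from the paper): the signal is embedded in a Hankel trajectory matrix, factored by SVD, and each rank-1 component is mapped back to a time series by anti-diagonal averaging.

```python
import numpy as np

def ssa_decompose(series, window, n_components):
    """Basic singular spectrum analysis of a 1-D signal:
    embed into a Hankel trajectory matrix, take the SVD, and
    reconstruct each rank-1 component by anti-diagonal averaging."""
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the signal
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for j in range(min(n_components, len(s))):
        Xj = s[j] * np.outer(U[:, j], Vt[j])  # rank-1 elementary matrix
        # Anti-diagonal (Hankel) averaging back to a 1-D time series
        comp = np.array([np.mean(Xj[::-1, :].diagonal(i - window + 1))
                         for i in range(n)])
        components.append(comp)
    return np.array(components)

# Hypothetical example: a synthetic "glottal area" trace made of a slow
# trend, a vibratory oscillation, and measurement noise
t = np.linspace(0, 4 * np.pi, 200)
signal = (0.5 * t + np.sin(5 * t)
          + 0.1 * np.random.default_rng(0).normal(size=t.size))
comps = ssa_decompose(signal, window=40, n_components=3)
trend = comps[0]  # the leading component typically captures the slow trend
```

Summing all `window` components reconstructs the original signal exactly, so the decomposition can be used to separate slow laryngeal motion from vibratory and noise components before quantification.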