Document Type
Conference Proceeding
Publication Date
2017
Published In
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Abstract
Syllabification does not seem to improve word-level RNN language modeling quality when compared to character-based segmentation. However, our best syllable-aware language model, achieving performance comparable to the competitive character-aware model, has 18%-33% fewer parameters and is trained 1.2-2.2 times faster.
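To make the abstract's comparison concrete, the sketch below shows the general shape of a syllable-aware language model: syllable embeddings are composed into word representations, which feed a word-level LSTM that predicts the next word. This is a minimal illustration assuming PyTorch; the sum composition, layer sizes, and all names are hypothetical, not the paper's exact configuration. The parameter savings come from the syllable vocabulary being far smaller than a word vocabulary while still covering every word.

```python
# Minimal sketch of a syllable-aware LM (illustrative, not the paper's model).
import torch
import torch.nn as nn

class SyllableAwareLM(nn.Module):
    def __init__(self, n_syllables, n_words, syl_dim=50, hid_dim=300):
        super().__init__()
        # Syllable vocabulary is much smaller than the word vocabulary,
        # which is where parameter savings can come from.
        self.syl_emb = nn.Embedding(n_syllables, syl_dim, padding_idx=0)
        self.rnn = nn.LSTM(syl_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, n_words)

    def forward(self, syl_ids):
        # syl_ids: (batch, seq_len, max_syllables_per_word)
        # Compose each word's syllable embeddings by summation
        # (the paper compares several composition functions).
        word_vecs = self.syl_emb(syl_ids).sum(dim=2)   # (b, t, syl_dim)
        hidden, _ = self.rnn(word_vecs)                # (b, t, hid_dim)
        return self.out(hidden)                        # next-word logits

# Toy usage: 2 sentences, 5 words each, up to 4 syllables per word.
model = SyllableAwareLM(n_syllables=2000, n_words=10000)
syl_ids = torch.randint(0, 2000, (2, 5, 4))
logits = model(syl_ids)  # shape: (2, 5, 10000)
```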
Published By
Association for Computational Linguistics
Conference Dates
September 7-11, 2017
Conference Location
Copenhagen, Denmark
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Recommended Citation
Z. Assylbekov, R. Takhanov, B. Myrzakhmetov, and J. N. Washington. (2017). "Syllable-Aware Neural Language Models: A Failure to Beat Character-Aware Ones". Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 1866-1872.
https://works.swarthmore.edu/fac-linguistics/227
Comments
This work is freely available under a Creative Commons license.