Current vocal output generation systems lack portability and variability, and text-to-speech synthesis is poorly accepted by users because of its lack of conviviality. We propose alternative approaches to generating spoken utterances that meet the quality requirements of a speech output system. Our language generation framework relies on conceptual segments, whose acoustic realization we control through four basic methods corresponding to four levels of representation: signal concatenation, text-to-speech synthesis, mimicking synthesis, and phonological prosodic commands driving the speech synthesizer. The examples produced represent a step towards flexible and contextually appropriate generation of spoken utterances.