Because the speech is pieced together from a database of recorded fragments, the sound is hard to modify, making it nearly impossible to add elements such as intonation and stress.
This is why robotic voices often sound monotonous and decidedly different from humans.
WaveNet, however, overcomes this problem by using its neural network model to build an audio signal from the ground up, one sample at a time.
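The sample-at-a-time idea can be illustrated with a toy autoregressive loop. This is a minimal sketch, not DeepMind's implementation: the `toy_model` function is a hypothetical stand-in for WaveNet's deep dilated-convolution network, and simply predicts a distribution over 256 quantized amplitude levels given the samples generated so far.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(history, levels=256):
    """Hypothetical stand-in for WaveNet's predictive network: given
    the samples so far, return a probability distribution over
    `levels` quantized amplitude values. Purely illustrative: it
    biases the next sample toward the previous one to mimic the
    smoothness of real audio."""
    last = history[-1] if history else levels // 2
    logits = -0.05 * (np.arange(levels) - last) ** 2
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def generate(model, n_samples, levels=256):
    """Autoregressive generation: each new sample is drawn from the
    model's distribution conditioned on everything produced so far,
    then appended to the history for the next step."""
    audio = []
    for _ in range(n_samples):
        probs = model(audio, levels)
        audio.append(int(rng.choice(levels, p=probs)))
    return audio

signal = generate(toy_model, 100)
```

Because every sample is conditioned on all previous ones, the model can shape fine-grained properties of the waveform, which is what lets WaveNet capture intonation rather than splicing fixed fragments.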
During training, the DeepMind team gave WaveNet real waveforms recorded from human speakers to learn from.
Using a type of AI called a neural network, the program then learns from these, in much the same way a human brain does.
The result was that WaveNet learned the characteristics of different voices, could make non-speech sounds, such as breathing and mouth movements, and could say the same thing in different voices.