We are committed to ensuring that our groundbreaking technology is used only for ethical projects and doesn't fall into the wrong hands.
Some ethical questions about synthetic speech are easy, but others are hard. We don't rely on gut feeling alone to tell us what is right. This set of principles guides our decision-making.
Dubdub.ai does not allow any deceptive uses of our technology.
Dubdub.ai does not use voices without permission when doing so could affect the subject's privacy or their ability to make a living. In practice, this means we will never use the voice of a private person or an actor without permission. In a handful of cases, we have used the voices of historical figures, such as Jawaharlal Nehru, without permission but non-deceptively, to demonstrate what the technology can do. While we will listen to requests, we are generally not open to new projects of this nature.
Ethical voice cloning principles
We know voice cloning technology can be dangerous.
While we're using it to revolutionize movies, video games and other creative projects, it can also fool people into thinking someone said something they didn't. That's deception, and it's just plain wrong. It's wrong to defraud people. It's wrong to create fake news.
It's all well and good to have strong principles, but how can we ensure that they are not violated?
Dubdub.ai does not provide any public API for creating new voices.
Dubdub.ai works directly with clients we trust.
Dubdub.ai requires written consent from voice owners.
Dubdub.ai only approves projects that meet our strict standards.
We are just getting started in this area, but our goals are clear:
Educate the public about the capabilities of synthetic speech technology.
Develop automatic detection algorithms that can identify synthetic speech even when it has not been watermarked by us.
Work with content gatekeepers such as Facebook and YouTube to limit the harm done by bad actors: prominently label all synthetic content, and ban content that is particularly unethical.
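To illustrate the watermarking idea mentioned above, here is a minimal, purely illustrative sketch of one classical technique, spread-spectrum audio watermarking: a key-derived pseudo-random pattern is added to the signal at low amplitude, and a detector that knows the key recovers it by correlation. This is a toy example for intuition only; it is not Dubdub.ai's actual scheme, and all function names, the key, and the parameters are hypothetical.

```python
import math
import random

def embed_watermark(samples, key, strength=0.05):
    # Add a key-derived pseudo-random +/-1 pattern at low amplitude.
    # `strength` trades off audibility against detection robustness.
    rng = random.Random(key)
    return [s + strength * rng.choice((-1.0, 1.0)) for s in samples]

def watermark_score(samples, key):
    # Correlate the audio with the key's pattern. The score is close to
    # `strength` for watermarked audio and close to zero otherwise.
    rng = random.Random(key)
    return sum(s * rng.choice((-1.0, 1.0)) for s in samples) / len(samples)

# Toy signal: one second of a 440 Hz tone sampled at 8 kHz.
clean = [0.5 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
marked = embed_watermark(clean, key="dubdub-demo")
```

A real system would need to survive compression, resampling and editing, which is exactly why the goal above also covers detectors that work even when no watermark is present.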