The biggest tech companies want you to know that they're taking special care to ensure that their use of artificial intelligence to sift through mountains of data, analyze faces or build virtual assistants doesn't spill over to the dark side.
But their efforts to assuage concerns that their machines may be used for nefarious ends have not been universally embraced. Some skeptics see them as mere window dressing by corporations more interested in profit than in society's best interests.
"Ethical AI" has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives.
The moves are meant to address concerns over racial and gender bias in facial recognition and other AI systems, as well as anxieties about job losses to the technology and its use by law enforcement and the military.
But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?