Anthropic, an artificial intelligence startup backed by Google owner Alphabet Inc, on Tuesday disclosed the set of written moral values it used to train Claude, its rival to the technology behind OpenAI's ChatGPT, and to make it safe.
Anthropic was founded by former executives from Microsoft Corp-backed OpenAI to focus on creating safe AI systems that will not, for example, tell users how to build a weapon or use racially biased language. But such systems struggle to anticipate everything people might ask, so they tend to avoid potentially contentious topics like politics and race altogether, making them less useful.