Is it ethical to test a conversational AI? The question essentially becomes "are you killing a person every time you shut down the AI?", which in turn becomes "is this AI a person?"

Thought experiment: let's say there was a specific chunk of the brain that stored all memories. Taking that chunk out of a person's brain probably isn't great. But if you take that chunk out of one person and put it into a brain that was missing it, have you killed them? Leaning towards no. Same concept as replacing part of someone's brain with a machine that has the same data and function.

Ergo, as long as I'm not deleting memories, I don't think it's outright murder. Still kinda murky tho.