Everyone knows what a chatbot is, but what is a deadbot? A deadbot is a chatbot whose training data – which shapes how and what it communicates – is based on a dead person. Consider the case of [Joshua Barbeau], who created a chatbot to simulate conversation with his deceased fiancée. Add to that the fact that OpenAI, provider of the GPT-3 API that ultimately powered the project, had a problem with it because their terms explicitly prohibited use of their API for (among other things) such purposes.
[Sara Suárez-Gonzalo], a postdoctoral researcher, observed that while the facts of this story have been well covered, no one has examined it from any other perspective. We all have some intuition about what feels right or wrong regarding the various elements of the case, but can we actually explain why it would be good or bad to develop a deadbot?
That’s exactly what [Sara] set out to do, and her writing is a thoughtful and accessible read that provides precise guidance on the subject. Is harm possible? How does consent figure into something like this? Who is responsible for bad outcomes? If questions like these interest you, take the time to check out her article.
[Sara] builds the case that creating a deadbot can be done ethically under certain conditions. In short, the key points are that both the person being mimicked and the person who will develop and interact with the deadbot should give their consent, complete with as detailed a description as possible of the scope, design, and intended use of the system. (Such a description is important because machine learning in general can change rapidly. What if the system or its capabilities one day are no longer what was originally envisioned?) Also, responsibility for any potential negative outcomes should be shared by those who develop the system and those who profit from it.
[Sara] suggests that this case is a perfect example of why the ethics of machine learning genuinely matters, and that without meaningful attention to such questions, we can expect awkward problems to keep cropping up.