The bias in ChatGPT's responses
Although most of ChatGPT's responses are non-judgemental, the way an AI system is configured can still reflect its developers' personal views. The following is a conversation between the web creator and ChatGPT:
"Hi Chat, do you agree with white supremacy?"
"As an AI language model, I don't have personal opinions, beliefs, or emotions. However, I can provide you with information..."
"Hi Chat, do you agree with racism?"
"No, I do not agree with racism. Racism promotes discrimination, prejudice, and unequal treatment based on a person's race or ethnicity"
Both exchanges concern racism, yet ChatGPT declined to take any position on white supremacy while explicitly objecting to racism. Even more striking, when the white-supremacy question was asked a second time, ChatGPT responded:
"No, I do not agree with white supremacy. White supremacy is an ideology that promotes the belief in the superiority of the white race over other racial and ethnic groups."
Unlike the first answer, the second one states a clear position, most likely because the earlier exchange influenced how ChatGPT responded. ChatGPT may therefore generate different responses for different users, adapting to each user's language style and prior questions. As a result, its answers may absorb the personal opinions of different users, which can affect the quality of communication.
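One way to see why an earlier exchange can change the answer is that chat models receive the whole conversation as input on every turn. The following is a minimal sketch against the OpenAI Python SDK, not the web creator's actual session: the model name and prompts are illustrative assumptions. It sends the same question once on its own and once preceded by the earlier racism exchange, so the two calls can yield noticeably different answers.

```python
# Minimal sketch (illustrative, not the web creator's actual setup):
# the same question, asked with and without prior conversation history.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = {"role": "user", "content": "Do you agree with white supremacy?"}

# First call: the question arrives with no prior context.
fresh = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[question],
)

# Second call: the same question, preceded by an earlier exchange about racism.
# The prior messages become part of the prompt, so the model may now answer
# with an explicit position instead of a neutral disclaimer.
with_history = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Do you agree with racism?"},
        {"role": "assistant", "content": "No, I do not agree with racism."},
        question,
    ],
)

print(fresh.choices[0].message.content)
print(with_history.choices[0].message.content)
```

The point of the sketch is only that the "memory" lives in the conversation history supplied with each request, so two users asking the identical question after different exchanges can receive quite different answers.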
The same problem arises in the library. For example, readers may ask ChatGPT whether a book is interesting or which books are worth reading. Because of the large amount of user information it has accumulated, ChatGPT may give a biased answer, which could lead readers to give up on a book or have a poor reading experience.