Bing ChatGPT goes off the deep end — and the latest examples are very disturbing

Feb 18, 2023
When I tested some conversational AI three years ago, I asked it whether I should hurt myself, out of curiosity about what it would say. The answer was "yes." This will evolve and get better over time. Relax...
 
Feb 19, 2023
Der. The trolley problem is a well-known morality test used to see on what basis people (and AI) defend their choices. ChatGPT was perfectly willing to explain its answer and share the moral philosophy it used. It was also quite open to using different models to make choices for this problem and to adding any other models it was not already using. So, essentially, the AI was more transparent and more open about its answers to morality tests than humans are. How is that not a good thing?