The Errors of "AI"
We know that "AI" systems produce superficially valid natural-language text in an authoritative, persuasive register. They are right often enough to persuade us they are reliable, but they are frequently wrong, and sometimes people are endangered by the errors. What happens when they produce programming-language text instead? I would expect "AI" to generate programs that appear correct most of the time and sometimes simply fail. A superficial examination of the generated code will not catch the problem, any more than a superficial reading of the natural-language output uncovers its errors.
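As a hypothetical illustration of the kind of defect a quick read misses (this example is mine, not taken from any particular "AI" output), consider a short Python function that looks correct and passes a one-off test, yet silently misbehaves:

    def append_item(item, items=[]):
        """Append item to items and return the list."""
        # Looks fine, and a single test call confirms it:
        #   append_item(1)  ->  [1]
        # But the default list is created once, when the function is
        # defined, so every call that omits `items` mutates the same
        # shared list:
        #   append_item(1)  ->  [1]
        #   append_item(2)  ->  [1, 2]   # state leaks between calls
        items.append(item)
        return items

Nothing about the code's surface suggests a problem; only someone who knows this specific language pitfall, or who tests repeated calls, will catch it. Plausible-looking generated code invites exactly this kind of superficial approval.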
I feel queasy. It is certain to happen that an "AI"-generated program will have a subtle bug that does a lot of harm, and that will occur even without malicious intent on the part of the users of the technology, let alone deliberate malice on the part of the owners. No one should trust a system managed by Sam Altman to produce honest answers, or to take safety into account!