The National Institute of Standards and Technology (NIST) has issued a proposal for identifying and managing bias in artificial intelligence.
"The proliferation of modeling and predictive approaches based on data-driven and machine-learning techniques has helped to expose various social biases baked into real-world systems, and there is growing evidence that the general public is concerned about the risks of AI to society," the proposal said.
The document outlines an approach for identifying and managing harmful bias in both deployed and in-development AI systems. "Such an approach will require functions such as a common vocabulary, clear and specific principles, governance methods, and strategies for assurance," it said.
NIST invites public comments on the proposal.
The three-stage process recommended by the Gaithersburg, Maryland-based agency in its proposal includes pre-design, where the technology is devised, defined and elaborated; design and development, where the technology is constructed; and deployment, where the technology is used by, or applied to, different individuals or groups.
A NIST study published in July 2020 found that even the best of 89 commercial facial recognition algorithms tested had error rates of between 5% and 50% when matching photos with digitally applied face masks to photos of the same person without a mask.