Ethics are too subjective to guide the use of AI, argue some legal scholars.
Over the past six years, the New York City police department has compiled a massive database containing the names and personal details of at least 17,500 individuals it believes to be involved in criminal gangs. The effort has already been criticized by civil rights activists who say it is inaccurate and racially discriminatory.
“Now imagine marrying facial recognition technology to the development of a database that theoretically presumes you’re in a gang,” Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund, said at the AI Now Symposium in New York last Tuesday.
Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But such calls often sidestep a couple of tricky questions: who gets to define those ethics, and who should enforce them?