The FTC Is Looking For Truth, Fairness, And Equity In The Use Of Artificial Intelligence
Written by: Richard Sheinis, Esq.
On April 19, 2021, the FTC issued what might be called guidance, but is really more of a warning, regarding the use of artificial intelligence. The FTC cautions against using AI in a way that produces discriminatory outcomes.
The FTC states that, to avoid bias and prejudice, the data sets on which AI is built must not exclude information from particular populations. Despite good intentions, the use of flawed data can result in discrimination on the basis of race, gender, or other protected classes.
The FTC states it will use Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act to bring enforcement actions against companies that discriminate through the use of AI. The FTC ends with a cautionary statement that if you don’t hold yourself accountable, the FTC may do it for you.
While the FTC is long on scary warnings, it is short on practical guidance. This stands in stark contrast to the “Ethics Guidelines For Trustworthy AI” recently published by the European Commission. The EC’s guidance provides a high-level view of using AI in a way that adheres to the ethical principles of respect for human autonomy, prevention of harm, fairness, and explicability. Unfortunately, the FTC is still relying on decades-old laws written long before the term “artificial intelligence” was even a thought. We can expect more of the same until the Federal Government decides to address privacy and data security through modern, comprehensive legislation.