As technology plays a more important role in the housing market, the U.S. Department of Housing and Urban Development (HUD) has proposed to address the role of algorithms used by industry businesses in an amendment to its Disparate Impact rule under the Fair Housing Act.
The amendment is based on a 2015 Supreme Court ruling in the case of Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc. The Supreme Court held that, under the disparate impact framework, the policy identified must be an “artificial, arbitrary, and unnecessary barrier” to fair housing.
HUD’s proposed rule, therefore, aims to provide a framework for establishing legal liability for facially neutral practices that have unintended discriminatory effects on classes of persons protected under the Fair Housing Act. The rule has no impact on determinations of intentional discrimination, the agency clarified in a statement.
In its 2015 decision, the Supreme Court upheld the use of a “disparate impact” theory to establish liability under the Fair Housing Act for business policies and local ordinances that disproportionately affect a protected class without legally sufficient justification, even when the policy or ordinance is neutral in intent and application.
“The goal of this proposed amendment is to bring more clarity to the disparate impact rule,” Paul Compton, General Counsel of HUD, told reporters during a call on Friday. He said that the amendment has “nothing to do with the intentional discrimination” rules under the Act.
Compton said that HUD was interested in hearing more about the impact of the proposed amendment on the role of new technology such as algorithms that use artificial intelligence to assess factors such as risk or creditworthiness.
In its proposal, HUD said that many commentators wanted these algorithms to be provided with a “safe harbor,” under the amended rule.
HUD explained that while the disparate impact rule provided an important tool to root out factors that “may cause these models to produce discriminatory outputs, these models can also be an invaluable tool in extending access to credit and other services to otherwise underserved communities.”
The agency, therefore, proposed that, under the amended rule, parties using these algorithms be provided with methods of defending their models in cases where they can prove that the algorithms achieve “legitimate objectives.”
“They are intended to ensure that disparate impact liability is ‘limited so employers and other regulated entities are able to make the practical business choices and profit-related decisions that sustain a vibrant and dynamic free-enterprise system,’” HUD said in its proposal.
Additionally, the proposed amendment clarified that the section was merely intended to recognize that additional guidance was necessary to respond to the complexity of “disparate impact cases challenging these models.”
HUD’s detailed proposal and call for comments are available on the agency’s website.