Antitrust-compliant AI industry self-regulation
Abstract
The touchstone of antitrust compliance is competition. To be legally permissible, any industry restraint of trade must have sufficient countervailing procompetitive justifications. Anticompetitive horizontal agreements such as boycotts (including agreements to refuse to produce certain products) are usually per se illegal.
The “learned professions,” including engineers, frequently engage in somewhat anticompetitive self-regulation through professional standards. These standards are not exempt from antitrust scrutiny. However, some Supreme Court opinions have held that certain forms of professional self-regulation that would otherwise receive per se condemnation may instead receive more lenient antitrust analysis under the “Rule of Reason,” which weighs procompetitive and anticompetitive impacts to determine legality. To receive rule-of-reason review, such professional self-regulation would need to: be promulgated by a professional body; not directly affect price or output level; and seek to correct some market failure, such as information asymmetry between professionals and their clients.
Professional ethical standards promulgated by a professional body (e.g., one comparable to the American Medical Association or the American Bar Association) that prohibit members from building unsafe AI could plausibly meet all of these requirements.
This paper does not argue that such an agreement would clearly prevail in court, that it would be legal, or that it would survive rule-of-reason review. It argues only that there is a colorable argument for analyzing such an agreement under the Rule of Reason rather than under a per se rule. This could therefore be a plausible route to an antitrust-compliant horizontal agreement not to engineer unsafe AI.