'A Model of Behavioral Manipulation' by Daron Acemoglu is a research paper that presents a model of online behavioral manipulation driven by advances in artificial intelligence (AI). The paper discusses how platforms dynamically offer products to users, and how user learning depends on a product's 'glossiness,' which captures attributes that make products appear more attractive than they are. Acemoglu, D. (2023). A Model of Behavioral Manipulation. NBER Working Paper No. 31872. National Bureau of Economic Research. https://doi.org/10.3386/w31872
\[
\begin{aligned}
\mathbb{E}\left[\Pi^{\text{post-AI}}\big(\{\mu_i, x_{i,0}\}_{i=1}^{n}\big)\right] &> \mathbb{E}\left[\Pi^{\text{pre-AI}}\big(\{\mu_i\}_{i=1}^{n}\big)\right], \\
\mathbb{E}\left[U^{\text{post-AI}}\big(\{\mu_i, x_{i,0}\}_{i=1}^{n}\big)\right] &> \mathbb{E}\left[U^{\text{pre-AI}}\big(\{\mu_i\}_{i=1}^{n}\big)\right], \\
\mathbb{E}\left[W^{\text{post-AI}}\big(\{\mu_i, x_{i,0}\}_{i=1}^{n}\big)\right] &> \mathbb{E}\left[W^{\text{pre-AI}}\big(\{\mu_i\}_{i=1}^{n}\big)\right].
\end{aligned}
\]

The informational advantage of the platform always increases its own profits, as it enables the platform to modify the user's behavior. Theorem 3 establishes that this informational advantage also increases expected user utility and welfare when the rate at which glossiness wears off is large enough. This theorem confirms what might be viewed as the conventional wisdom in the literature: more data enables better allocation of products and therefore benefits the users and society as a whole. The theorem follows from the fact that, when glossiness wears off quickly enough, the helpfulness effect, characterized in Proposition 2, dominates the manipulation effect of Proposition 1. In the next subsection, we will see that this intuition does not hold in general and the platform's informational advantage may harm users.
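To see in a back-of-the-envelope way why the speed at which glossiness wears off governs which effect dominates, consider a rough calculation (the exponentially distributed wear-off time and the placeholder rate symbol $\kappa$ below are assumptions of this illustration, not notation taken from the theorem). If a product's glossiness disappears at an exponential time $\tau$ with rate $\kappa$ and the user discounts at rate $r$, the expected discounted length of the window during which a glossy low-quality product generates no bad news is

\[
\mathbb{E}\!\left[\int_0^{\tau} e^{-rt}\,dt\right] \;=\; \frac{1 - \mathbb{E}\left[e^{-r\tau}\right]}{r} \;=\; \frac{1}{r}\left(1 - \frac{\kappa}{\kappa + r}\right) \;=\; \frac{1}{\kappa + r}.
\]

As $\kappa \to \infty$ this window shrinks to zero, leaving little scope for exploiting glossiness, so the helpfulness effect dominates; as $\kappa \to 0$ the window approaches the full discounted horizon $1/r$.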
6.2 When Behavioral Manipulation Harms Users

Here, we show that in the post-AI environment, the user's utility decreases because of behavioral manipulation: the manipulation effect dominates the helpfulness effect.

Theorem 4. Suppose the initial beliefs $\{\mu_i\}_{i=1}^{n}$ are i.i.d. and uniform over $[0, 1]$. For any $r$ and any values of the other model parameters, there exists a threshold on the rate at which glossiness wears off such that, whenever this rate is below the threshold,

\[
\begin{aligned}
\mathbb{E}\left[\Pi^{\text{post-AI}}\big(\{\mu_i, x_{i,0}\}_{i=1}^{n}\big)\right] &> \mathbb{E}\left[\Pi^{\text{pre-AI}}\big(\{\mu_i\}_{i=1}^{n}\big)\right], \\
\mathbb{E}\left[U^{\text{post-AI}}\big(\{\mu_i, x_{i,0}\}_{i=1}^{n}\big)\right] &< \mathbb{E}\left[U^{\text{pre-AI}}\big(\{\mu_i\}_{i=1}^{n}\big)\right], \\
\mathbb{E}\left[W^{\text{post-AI}}\big(\{\mu_i, x_{i,0}\}_{i=1}^{n}\big)\right] &< \mathbb{E}\left[W^{\text{pre-AI}}\big(\{\mu_i\}_{i=1}^{n}\big)\right].
\end{aligned}
\]

In this case, with a low glossiness decay rate, the platform's informational advantage enables it to engage in behavioral manipulation: the user is pushed towards glossy products (more precisely, towards products whose initial glossiness state is $x_{i,0} = 1$). Because these products do not generate bad news in the short run, the user's belief will become more positive for a while, and this will enable the platform to charge higher prices.
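The belief dynamics behind this manipulation can be illustrated with a standard bad-news Poisson learning benchmark; the specific functional form below is an assumption of this illustration rather than a formula taken from the paper. Suppose a low-quality, non-glossy product generates conclusive bad news at Poisson rate $\lambda$, while a high-quality product generates none, and suppose the user updates as if the product were not glossy. Then, conditional on no news arriving up to time $t$, Bayes' rule gives

\[
\mu_t \;=\; \frac{\mu_0}{\mu_0 + (1 - \mu_0)\, e^{-\lambda t}},
\]

which rises towards 1 as long as no news arrives. A glossy low-quality product generates no bad news while its glossiness lasts, so the user's belief about it keeps drifting upwards during that window, and this is the margin the platform exploits when charging the user.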
However, because glossy products are low quality, this behavioral manipulation is bad for user utility and utilitarian welfare. That the glossiness decay rate is small here is important. As we saw, when this rate is large, the platform expects the glossiness of a product to wear off quickly, and thus it is not worthwhile to push the user towards glossy products. But from a welfare point of view, it is more costly to have users consume glossy products when the decay rate is small, because they will not discover for quite a while that the product is actually not high quality. It is this feature of behavioral manipulation that reduces user utility and utilitarian welfare.
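The welfare cost of long-lived glossiness can also be quantified numerically. The snippet below is a minimal toy calculation, not the paper's model: it assumes glossiness wears off after an exponential time with rate `kappa`, that bad news then arrives at Poisson rate `lam`, and that the user keeps consuming the low-quality product until the bad news arrives; all symbols and functional forms here are assumptions of the sketch.

```python
import numpy as np

def discounted_exposure(r, kappa, lam):
    """Closed form for E[ integral_0^{tau1+tau2} e^{-rt} dt ], where
    tau1 ~ Exp(kappa) is the time until glossiness wears off and
    tau2 ~ Exp(lam) is the additional time until bad news arrives."""
    return (1.0 - (kappa / (kappa + r)) * (lam / (lam + r))) / r

def discounted_exposure_mc(r, kappa, lam, n_draws=200_000, seed=0):
    """Monte Carlo check of the same quantity."""
    rng = np.random.default_rng(seed)
    tau = rng.exponential(1 / kappa, n_draws) + rng.exponential(1 / lam, n_draws)
    return np.mean((1.0 - np.exp(-r * tau)) / r)

r, lam = 0.05, 1.0
for kappa in [5.0, 1.0, 0.2, 0.05]:  # glossiness decay rate, high to low
    print(f"kappa={kappa:5}: closed form={discounted_exposure(r, kappa, lam):6.2f}, "
          f"MC={discounted_exposure_mc(r, kappa, lam):6.2f}  (upper bound 1/r={1/r:.0f})")
```

Under these toy parameters, the expected discounted exposure to the low-quality product rises from roughly 1.1 at `kappa = 5` to roughly 10.5 at `kappa = 0.05` (out of a maximum of 1/r = 20), which is the sense in which slowly decaying glossiness is disproportionately costly for the user.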
6.3 Big Data Double Whammy: More Products Negatively Impact User Welfare

The availability of big data provides platforms with valuable insights into predictable patterns of user behavior, which can be leveraged for behavioral manipulation, as we have established thus far. Moreover, the same advances in AI also enable digital platforms to expand the range of products and services they offer. Next, we demonstrate that this combination of greater choice and more platform information may be particularly pernicious: as the number of products increases, the potential for behavioral manipulation increases as well. This result highlights that multiple aspects of the new capabilities of digital platforms closely interact in affecting user welfare.

Theorem 5. Suppose the initial beliefs $\{\mu_i\}_{i=1}^{n+1}$ are i.i.d. and uniform over $[\tfrac{1}{2}, \tfrac{1}{2} + \epsilon]$. For any $r$ and any values of the other model parameters, there exist thresholds such that whenever $\epsilon$ is small enough, the rate at which glossiness wears off is low enough, and $n$ is sufficiently large, in the post-AI environment we have: