This paper models social learning in an environment in which agents randomly meet others over time. At each meeting, agents can share verifiable signals of the state of the world, which can be obtained from Nature or from previous meetings. We assume that meetings are "short," so there is a maximum number of signals that can be sent in any meeting. Payoffs are such that agents generally want to convince others of a certain thing (e.g., that the state is high) rather than of the truth. With short meetings, a high-information environment can result in less credible communication, as agents can saturate their messages with favorable signals no matter what the true state is. This logic has several implications. First, as meetings allow agents to accumulate signals, learning first accelerates and then decelerates; thus, in an environment with frequent meetings, most meaningful communication happens in earlier periods. Second, generally only partial learning can be obtained. Partial reproducibility, in the form of agents only sometimes being able to pass on signals they receive from others, can help matters: it can enable more learning in the limit, including full learning. However, the degree of partial reproducibility needed to obtain full learning may make learning very slow. There is, in general, a tradeoff between the amount of learning achieved in the long run and the speed of learning.
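The saturation logic can be illustrated with a toy simulation. The sketch below is not the paper's model; it only demonstrates the qualitative point that, once agents hold enough favorable signals, a message capped at K signals can be filled with favorable evidence in either state, so the message stops distinguishing the states. All specifics (the cap K, the signal probabilities in each state, the horizon, one Nature draw per period) are hypothetical choices made for illustration.

```python
# Toy illustration (not the paper's model): with meetings capped at K signals,
# a richer signal environment lets agents in EITHER state fill the cap with
# favorable signals, so a "K favorable signals" message becomes uninformative.
# All parameters below are hypothetical.
import random

def simulate(state_is_high, n_agents=200, periods=60, k_cap=3,
             p_fav_high=0.7, p_fav_low=0.3, seed=0):
    rng = random.Random(seed)
    # probability that a signal drawn from Nature is "favorable" (supports the high state)
    p_fav = p_fav_high if state_is_high else p_fav_low
    favorable = [0] * n_agents    # each agent's stock of favorable signals
    saturated_share = []          # share of agents able to fill a K-signal message
    for _ in range(periods):
        for i in range(n_agents):
            # every period, each agent draws one verifiable signal from Nature
            if rng.random() < p_fav:
                favorable[i] += 1
        saturated_share.append(sum(f >= k_cap for f in favorable) / n_agents)
    return saturated_share

high = simulate(state_is_high=True)
low = simulate(state_is_high=False)
for t in (5, 20, 59):
    print(f"t={t:2d}  saturated share: high state {high[t]:.2f} | low state {low[t]:.2f}")
# Once both shares approach 1, a message consisting of K favorable signals
# carries almost no information about the state: the saturation effect.
```

Early on, only agents in the high state tend to be saturated, so full messages of favorable signals are credible; as signal stocks grow, agents in the low state can also saturate the cap, which is the sense in which more information can make communication less credible.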