The idea of viral marketing is that of exploiting a pre-existing social network in order to increase brand awareness or product sales through self-replicating viral processes. The goal is to “target” the most influential users in the network in order to maximize the spread of the marketing message with a small initial marketing cost.
An essential computational task towards setting up an effective viral marketing campaign is to estimate social influence. Namely, if Alice is a follower (or a friend) of Bob in the underlying social network, how probable is it that an action that Bob performs and reports will influence Alice into doing the same? Such estimates are usually obtained by analyzing user activity data. The data analysis and sharing that is needed to estimate social influence raises important privacy issues that may jeopardize the legal, ethical and societal acceptability of such practice, and in turn, the concrete applicability of viral marketing in the real world. Tassa and Bonchi (EDBT 2014) devised secure multi-party protocols that allow a group of service providers and a social networking platform (the so-called “host”) to jointly compute social influence in a privacy-preserving manner. They assumed that the players are semi-honest, i.e., that they follow the protocol correctly, but at the same time examine their view of the protocol execution in order to extract information on the inputs provided by their peers.
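To make the estimation task concrete, here is a minimal sketch of one common frequentist approach: estimate the influence probability of Bob over Alice as the fraction of Bob's reported actions that Alice repeats within some time window. The log format, function name, and parameters are illustrative assumptions, not the method of the papers discussed here.

```python
def estimate_influence(action_log, follower, followee, window):
    """Hypothetical frequentist estimate of social influence:
    the fraction of `followee`'s reported actions that `follower`
    repeats within `window` time units.

    action_log: list of (user, action, timestamp) tuples (assumed format).
    """
    followee_actions = [(a, t) for (u, a, t) in action_log if u == followee]
    # Map each action of the follower to its timestamp (last occurrence wins).
    follower_actions = {a: t for (u, a, t) in action_log if u == follower}
    if not followee_actions:
        return 0.0
    copied = sum(
        1
        for (a, t) in followee_actions
        if a in follower_actions and 0 < follower_actions[a] - t <= window
    )
    return copied / len(followee_actions)
```

In a real campaign such estimates would be computed over logs held by different parties, which is exactly the data sharing that motivates the privacy-preserving protocols described below.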
In a recent paper by Rica Gonen and Tamir Tassa (presented at the FASES workshop of the ARES 2016 conference), we discuss the case of selfish rational players. Such players participate in the protocol and follow it correctly only if doing so is in their best interest and maximizes their utility. We enhance the protocol of Tassa and Bonchi by incorporating into it mechanisms that incentivize the players to participate in the protocol truthfully. In particular, whereas in the basic protocol (EDBT 2014) all the output went to the host, and the service providers were expected to contribute their private data as inputs to the computation without getting any reward in return, the enhanced protocol issues parts of the computed output to them as well.
In order to modify the basic protocol into one that incentivizes the host to share with the service providers their requested parts of the output, the protocol is executed several times (in so-called “rounds”), with an additional player T acting as a coordinator. In each round, T tosses a biased coin that determines whether that round is fake or true.
In fake rounds, all computed results are meaningless. Those rounds are performed in order to “catch” cheating participants and apply a punishment strategy: if a cheat is detected, the protocol aborts. Only in the last round, which is the true one, are the results correct. By formalizing a utility function for each player (“what do I gain from behaving truthfully?” / “what do I gain from cheating?” / “what do I lose if the protocol aborts?”), and by selecting the coin's bias accordingly, it is possible to incentivize rational players to participate and act truthfully.
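The incentive argument above can be sketched with a small simulation. All utility values, the coin bias, and the modeling choice that a cheater cheats in the first round are illustrative assumptions, not figures from the paper: the point is only that when the probability of a round being true is small enough, the expected utility of cheating drops below that of truthful play.

```python
import random

# Illustrative utilities (assumptions, not from the paper):
U_TRUTH = 10   # gain from receiving the true round's correct output
U_CHEAT = 15   # one-off gain from cheating successfully in the true round
U_ABORT = 0    # a detected cheat aborts the protocol: no output for anyone

def run_protocol(p_true, player_cheats, rng):
    """Coordinator T tosses a biased coin each round; with probability
    p_true the round is the true (and last) one. A cheat in a fake round
    is detected and the protocol aborts. For simplicity, a cheating
    player is modeled as cheating in the very first round."""
    while True:
        is_true_round = rng.random() < p_true
        if player_cheats:
            return U_CHEAT if is_true_round else U_ABORT
        if is_true_round:
            return U_TRUTH

def expected_utility(p_true, player_cheats, trials=100_000, seed=0):
    """Monte Carlo estimate of a player's expected utility."""
    rng = random.Random(seed)
    total = sum(run_protocol(p_true, player_cheats, rng) for _ in range(trials))
    return total / trials
```

With these numbers, cheating pays off only in the true round, so its expected utility is `p_true * U_CHEAT`; choosing any `p_true < U_TRUTH / U_CHEAT` (here, below 2/3) makes truthful behavior the rational choice.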
About the author(s)
Rica Gonen | The Open University of Israel