# PoliciesMultiPlayers.Scenario1 module

Scenario1: make a set of M experts with the following behavior, for K = 2 arms: at every round, one expert is chosen uniformly at random to predict arm 0, and all the others predict arm 1.

• Reference: Beygelzimer, A., Langford, J., Li, L., Reyzin, L., & Schapire, R. E. (2011, April). Contextual Bandit Algorithms with Supervised Learning Guarantees. In AISTATS (pp. 19-26).
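The round-by-round behavior described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the library's implementation; the function name `scenario1_round` is hypothetical.

```python
import random

def scenario1_round(nb_players):
    """One round of the Scenario1 behavior for K = 2 arms:
    one expert, chosen uniformly at random, predicts arm 0;
    every other expert predicts arm 1."""
    chosen = random.randrange(nb_players)  # uniform choice among the M experts
    return [0 if j == chosen else 1 for j in range(nb_players)]

predictions = scenario1_round(10)
# exactly one expert predicts arm 0, the other nine predict arm 1
assert predictions.count(0) == 1 and predictions.count(1) == 9
```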

class PoliciesMultiPlayers.Scenario1.OneScenario1(mother, playerId)[source]

OneScenario1: at every round, one expert is chosen uniformly at random to predict arm 0, and all the others predict arm 1.

__init__(mother, playerId)[source]

Initialize self. See help(type(self)) for accurate signature.

__str__()[source]

Return str(self).

__repr__()[source]

Return repr(self).

__module__ = 'PoliciesMultiPlayers.Scenario1'
class PoliciesMultiPlayers.Scenario1.Scenario1(nbPlayers, nbArms, lower=0.0, amplitude=1.0)[source]

Scenario1: make a set of M experts with the following behavior, for K = 2 arms: at every round, one expert is chosen uniformly at random to predict arm 0, and all the others predict arm 1.

• Reference: Beygelzimer, A., Langford, J., Li, L., Reyzin, L., & Schapire, R. E. (2011, April). Contextual Bandit Algorithms with Supervised Learning Guarantees. In AISTATS (pp. 19-26).

__init__(nbPlayers, nbArms, lower=0.0, amplitude=1.0)[source]

• nbPlayers: number of players to create (in self._players).

Examples:

>>> s = Scenario1(10, 2)  # 10 players, K = 2 arms

• To get the list of usable players, use s.children.

• Warning: s._players is for internal use only.

__str__()[source]

Return str(self).

_startGame_one(playerId)[source]

Forward the call to self._players[playerId].

_getReward_one(playerId, arm, reward)[source]

Forward the call to self._players[playerId].

_choice_one(playerId)[source]

Forward the call to self._players[playerId].
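The three `_xxx_one(playerId, ...)` methods above all follow the same delegation pattern: the mother class holds one internal player per index and forwards each call to `self._players[playerId]`. A minimal sketch of that pattern, assuming hypothetical names `MotherSketch` and `_Player` (the real classes differ):

```python
class _Player:
    """Hypothetical stand-in for one internal player object."""
    def __init__(self, player_id):
        self.player_id = player_id
        self.t = 0

    def startGame(self):
        self.t = 0  # reset the internal round counter

    def getReward(self, arm, reward):
        self.t += 1  # record that one more round was observed

    def choice(self):
        return self.player_id % 2  # placeholder decision rule


class MotherSketch:
    """Sketch of the forwarding pattern: every _xxx_one(playerId, ...)
    call is delegated to the matching method of self._players[playerId]."""
    def __init__(self, nbPlayers):
        self._players = [_Player(i) for i in range(nbPlayers)]

    def _startGame_one(self, playerId):
        return self._players[playerId].startGame()

    def _getReward_one(self, playerId, arm, reward):
        return self._players[playerId].getReward(arm, reward)

    def _choice_one(self, playerId):
        return self._players[playerId].choice()
```

This keeps the per-player state isolated in each child object while the mother class exposes a single indexed interface to the simulation loop.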

__module__ = 'PoliciesMultiPlayers.Scenario1'