Policies.RAWUCB module

author: Julien Seznec

Rotting Adaptive Window Upper Confidence Bounds for rotting bandits.

Reference: [Seznec et al., 2019b] A single algorithm for both rested and restless rotting bandits (WIP), Julien Seznec, Pierre Ménard, Alessandro Lazaric, Michal Valko

class Policies.RAWUCB.EFF_RAWUCB(nbArms, alpha=0.06, subgaussian=1, m=None, delta=None)[source]

Bases: Policies.FEWA.EFF_FEWA

Efficient Rotting Adaptive Window Upper Confidence Bound (RAW-UCB) [Seznec et al., 2019b, WIP]. Efficient trick described in [Seznec et al., 2019a, https://arxiv.org/abs/1811.11043] (m=2) and [Seznec et al., 2019b, WIP] (m<=2). We use the confidence level :math:`\delta_t = \frac{1}{t^\alpha}`.

choice()[source]

Not defined.

_compute_ucb()[source]
_append_thresholds(w)[source]
__str__()[source]

-> str

__module__ = 'Policies.RAWUCB'
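The RAW-UCB index of an arm is the tightest (smallest) upper confidence bound over all windows of its most recent pulls, with the bonus derived from the confidence level :math:`\delta_t = 1/t^\alpha`. The following is a minimal illustrative sketch, not the library's implementation: the function name is hypothetical, and it scans every window size instead of the efficient geometric grid of windows (the `m` trick) that `EFF_RAWUCB` uses.

```python
import numpy as np

def rawucb_index(rewards, t, alpha=1.0, subgaussian=1.0):
    """Illustrative RAW-UCB index for one arm.

    `rewards` holds that arm's past rewards in pull order; the index is the
    minimum, over window sizes h, of (mean of the h freshest rewards) plus a
    subgaussian confidence width sqrt(2 * sigma^2 * log(1/delta_t) / h),
    with delta_t = 1 / t**alpha.
    """
    rewards = np.asarray(rewards, dtype=float)
    n = len(rewards)
    if n == 0:
        return np.inf  # unpulled arms are sampled first
    best = np.inf
    for h in range(1, n + 1):
        mean_h = rewards[-h:].mean()  # average of the h most recent pulls
        width = subgaussian * np.sqrt(2.0 * alpha * np.log(t) / h)
        best = min(best, mean_h + width)
    return best
```

The policy then plays the arm with the largest index; small windows stay tight when a reward has recently rotted, while large windows shrink the confidence width for stable arms.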
class Policies.RAWUCB.EFF_RAWklUCB(nbArms, subgaussian=1, alpha=1, klucb=CPUDispatcher(<function klucbBern>), tol=0.0001, m=2)[source]

Bases: Policies.RAWUCB.EFF_RAWUCB

Use a KL confidence bound instead of the closed-form approximation. Experimental work: much slower, because we compute many UCBs per arm at each round.

__init__(nbArms, subgaussian=1, alpha=1, klucb=CPUDispatcher(<function klucbBern>), tol=0.0001, m=2)[source]

New policy.

choice()[source]

Not defined.

__str__()[source]

-> str

__module__ = 'Policies.RAWUCB'
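`EFF_RAWklUCB` replaces the closed-form subgaussian bonus with a KL upper confidence bound computed numerically, which explains the slowdown: one root-finding pass per window, per arm, per round. Below is a self-contained sketch of a Bernoulli KL-UCB solved by bisection, in the spirit of `klucbBern`; the helper names are illustrative, not the library's API.

```python
import math

def kl_bern(p, q):
    """Binary KL divergence kl(p, q), clipped away from 0 and 1."""
    eps = 1e-15
    p = min(max(p, eps), 1.0 - eps)
    q = min(max(q, eps), 1.0 - eps)
    return p * math.log(p / q) + (1.0 - p) * math.log((1.0 - p) / (1.0 - q))

def klucb_bern(mean, level, tol=1e-4):
    """Largest q in [mean, 1] with kl(mean, q) <= level, found by bisection.

    `level` plays the role of log(1/delta_t) / (number of samples in the
    window); `tol` mirrors the `tol` parameter of EFF_RAWklUCB.
    """
    lo, hi = mean, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if kl_bern(mean, mid) <= level:
            lo = mid  # mid still satisfies the constraint: move up
        else:
            hi = mid  # mid violates it: move down
    return lo
```

Each bisection costs tens of KL evaluations, versus one square root for the closed-form bound, which is the trade-off the docstring's "much slower" warning refers to.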
class Policies.RAWUCB.RAWUCB(nbArms, subgaussian=1, alpha=1)[source]

Bases: Policies.RAWUCB.EFF_RAWUCB

Rotting Adaptive Window Upper Confidence Bound (RAW-UCB) [Seznec et al., 2019b, WIP]. We use the confidence level :math:`\delta_t = \frac{1}{t^\alpha}`.

__init__(nbArms, subgaussian=1, alpha=1)[source]

New policy.

__str__()[source]

-> str

__module__ = 'Policies.RAWUCB'
class Policies.RAWUCB.EFF_RAWUCB_asymptotic(nbArms, subgaussian=1, beta=2, m=2)[source]

Bases: Policies.RAWUCB.EFF_RAWUCB

Efficient Rotting Adaptive Window Upper Confidence Bound (RAW-UCB) [Seznec et al., 2019b, WIP]. We use the confidence level :math:`\delta_t = \frac{1}{t(1+\log(t)^\beta)}`.

:math:`\beta=2` corresponds to the asymptotically optimal tuning of UCB for stationary bandits (Bandit Algorithms, Lattimore and Szepesvári, Chapter 7, https://tor-lattimore.com/downloads/book/book.pdf).

__init__(nbArms, subgaussian=1, beta=2, m=2)[source]

New policy.

__str__()[source]

-> str

_inlog()[source]
__module__ = 'Policies.RAWUCB'
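Under the asymptotic tuning, the quantity entering the confidence width is :math:`\log(1/\delta_t) = \log\big(t(1+\log(t)^\beta)\big)`, which for :math:`\beta=2` grows like :math:`\log t + 2\log\log t`. A sketch of that computation, assuming this is the role of the class's internal log term (the function name is hypothetical, not the library's `_inlog`):

```python
import math

def inlog_asymptotic(t, beta=2.0):
    """log(1/delta_t) for the asymptotic tuning
    delta_t = 1 / (t * (1 + log(t)**beta)), assumed here for t > 1."""
    return math.log(t * (1.0 + math.log(t) ** beta))
```

Compared with the :math:`\delta_t = 1/t^\alpha` tuning, this inflates the log term only by a doubly-logarithmic amount, which is what makes the resulting UCB asymptotically optimal on stationary problems.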