Policies.klUCBHPlus module

The improved kl-UCB-H+ policy, for one-parameter exponential distributions. Reference: [Lai 87](https://projecteuclid.org/download/pdf_1/euclid.aos/1176350495)

class Policies.klUCBHPlus.klUCBHPlus(nbArms, horizon=None, tolerance=0.0001, klucb=klucbBern, c=1.0, lower=0.0, amplitude=1.0)[source]

Bases: Policies.klUCB.klUCB

The improved kl-UCB-H+ policy, for one-parameter exponential distributions. Reference: [Lai 87](https://projecteuclid.org/download/pdf_1/euclid.aos/1176350495)

__init__(nbArms, horizon=None, tolerance=0.0001, klucb=klucbBern, c=1.0, lower=0.0, amplitude=1.0)[source]

New generic index policy.

  • nbArms: the number of arms,

  • lower, amplitude: lower value and known amplitude of the rewards.

horizon = None

Parameter \(T\) = known horizon of the experiment.
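
For illustration, here is a minimal usage sketch (not taken from the library's own examples). It assumes the Policies package is importable as documented on this page, and uses the startGame / choice / getReward interface inherited from the generic index policy.

```python
# Hedged sketch: run kl-UCB-H+ on a 3-armed Bernoulli problem with a known horizon T.
import numpy as np
from Policies.klUCBHPlus import klUCBHPlus

horizon = 10000                          # T = known length of the experiment
means = [0.1, 0.5, 0.9]                  # hypothetical Bernoulli arm means
policy = klUCBHPlus(nbArms=3, horizon=horizon)
policy.startGame()                       # reset pulls, rewards and indexes

rng = np.random.default_rng(42)
for t in range(horizon):
    arm = policy.choice()                # play the arm with the largest index I_k(t)
    reward = float(rng.random() < means[arm])
    policy.getReward(arm, reward)        # update N_k(t) and X_k(t) for that arm
```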

__str__() -> str[source]

Return a readable string representation of this policy.

computeIndex(arm)[source]

Compute the current index, at time t and after \(N_k(t)\) pulls of arm k:

\[\begin{split}\hat{\mu}_k(t) &= \frac{X_k(t)}{N_k(t)}, \\ U_k(t) &= \sup\limits_{q \in [a, b]} \left\{ q : \mathrm{kl}(\hat{\mu}_k(t), q) \leq \frac{c \log(T / N_k(t))}{N_k(t)} \right\},\\ I_k(t) &= U_k(t).\end{split}\]

where rewards are assumed to lie in \([a, b]\) (default \([0, 1]\)), \(\mathrm{kl}(x, y)\) is the Kullback-Leibler divergence between two distributions of means \(x\) and \(y\) (see Arms.kullback), and \(c\) is a tunable parameter (default \(1\)).
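
Below is a minimal, self-contained Python sketch of this index computation for Bernoulli rewards. The names klBern, klucbBern and compute_index are illustrative re-implementations (the supremum \(U_k(t)\) is found by bisection), not the Numba-compiled routines from Arms.kullback used by the actual class.

```python
from math import log

def klBern(x, y, eps=1e-15):
    """Kullback-Leibler divergence between Bernoulli distributions of means x and y."""
    x = min(max(x, eps), 1 - eps)
    y = min(max(y, eps), 1 - eps)
    return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))

def klucbBern(mu_hat, level, tolerance=1e-4):
    """Largest q in [mu_hat, 1] such that kl(mu_hat, q) <= level, by bisection."""
    lower, upper = mu_hat, 1.0
    while upper - lower > tolerance:
        q = (lower + upper) / 2.0
        if klBern(mu_hat, q) <= level:
            lower = q
        else:
            upper = q
    return (lower + upper) / 2.0

def compute_index(rewards_k, pulls_k, horizon, c=1.0, tolerance=1e-4):
    """Index I_k(t) = U_k(t) of one arm, following the kl-UCB-H+ formula above."""
    if pulls_k < 1:
        return float('+inf')          # never-played arms get an infinite index
    mu_hat = rewards_k / pulls_k      # empirical mean of arm k
    level = c * log(horizon / pulls_k) / pulls_k
    return klucbBern(mu_hat, level, tolerance)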

computeAllIndex()[source]

Compute the current indexes for all arms, in a vectorized manner.
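
For concreteness, here is a hedged NumPy sketch of such a vectorized computation for Bernoulli rewards; kl_bern and compute_all_index are illustrative names, not the library's actual code path.

```python
import numpy as np

def kl_bern(x, y, eps=1e-15):
    """Elementwise Bernoulli KL divergence kl(x, y), clipped away from 0 and 1."""
    x = np.clip(x, eps, 1 - eps)
    y = np.clip(y, eps, 1 - eps)
    return x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y))

def compute_all_index(rewards, pulls, horizon, c=1.0, tolerance=1e-4):
    """kl-UCB-H+ indexes of all arms at once, via a bisection run in parallel over arms."""
    rewards = np.asarray(rewards, dtype=float)
    pulls = np.asarray(pulls, dtype=float)
    indexes = np.full(pulls.shape, np.inf)   # never-played arms keep index +inf
    played = pulls >= 1
    if not np.any(played):
        return indexes
    mu_hat = rewards[played] / pulls[played]
    level = c * np.log(horizon / pulls[played]) / pulls[played]
    low, up = mu_hat.copy(), np.ones_like(mu_hat)
    while np.max(up - low) > tolerance:      # bisection step applied to all played arms
        mid = (low + up) / 2.0
        ok = kl_bern(mu_hat, mid) <= level
        low = np.where(ok, mid, low)
        up = np.where(ok, up, mid)
    indexes[played] = (low + up) / 2.0
    return indexes
```

For example, compute_all_index([2.0, 5.0, 1.0], [4, 10, 0], horizon=100) returns finite indexes for the first two arms and +inf for the never-played third one.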

__module__ = 'Policies.klUCBHPlus'