Policies.klUCB_forGLR module¶
The generic KL-UCB policy for one-parameter exponential distributions, using a different exploration time step for each arm (\(\log(t_k) + c \log(\log(t_k))\) instead of \(\log(t) + c \log(\log(t))\)).
- It is designed to be used with the wrapper GLR_UCB (a small sketch of the per-arm exploration function follows this list).
- By default, it assumes Bernoulli arms. 
- Reference: [Garivier & Cappé - COLT, 2011](https://arxiv.org/pdf/1102.2490.pdf). 
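To make the per-arm exploration function concrete, here is a small illustrative sketch (not library code) of \(f\); clamping \(\log(t)\) at 1 for small \(t\) is an assumption made here to keep the double logarithm well defined.

```python
# Illustrative sketch only, not the library implementation.
import math

def f(t, c=3):
    # log(t) + c * log(log(t)); log(t) is clamped at 1 so that the
    # double logarithm is defined and non-negative for small t.
    return math.log(t) + c * math.log(max(math.log(t), 1.0))

t, t_k = 1000, 120   # global time step vs. a hypothetical per-arm time t_k
print(f(t))          # exploration level based on the common time step t
print(f(t_k))        # smaller level when arm k was restarted more recently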
Policies.klUCB_forGLR.c = 3¶
- Default value when using \(f(t) = \log(t) + c \log(\log(t))\), as klUCB_forGLR is inherited from klUCBloglog.
Policies.klUCB_forGLR.TOLERANCE = 0.0001¶
- Default value of the tolerance used when computing numerical approximations of the kl-UCB indexes.
class Policies.klUCB_forGLR.klUCB_forGLR(nbArms, tolerance=0.0001, klucb=CPUDispatcher(<function klucbBern>), c=3, lower=0.0, amplitude=1.0)[source]¶
- Bases: Policies.klUCBloglog.klUCBloglog
- The generic KL-UCB policy for one-parameter exponential distributions, using a different exploration time step for each arm (\(\log(t_k) + c \log(\log(t_k))\) instead of \(\log(t) + c \log(\log(t))\)).
- It is designed to be used with the wrapper GLR_UCB.
- By default, it assumes Bernoulli arms. 
- Reference: [Garivier & Cappé - COLT, 2011](https://arxiv.org/pdf/1102.2490.pdf). 
 
 
__init__(nbArms, tolerance=0.0001, klucb=CPUDispatcher(<function klucbBern>), c=3, lower=0.0, amplitude=1.0)[source]¶
- New generic index policy.
- nbArms: the number of arms,
- lower, amplitude: lower value and known amplitude of the rewards.
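As a quick usage illustration, here is a hypothetical interaction loop, assuming SMPyBandits is installed (the reward values are dummy placeholders):

```python
from SMPyBandits.Policies import klUCB_forGLR

policy = klUCB_forGLR(nbArms=3, tolerance=1e-4, c=3)
policy.startGame()
for _ in range(100):
    arm = policy.choice()                 # arm with the largest kl-UCB index
    reward = 1.0 if arm == 0 else 0.0     # dummy Bernoulli-style feedback
    policy.getReward(arm, reward)
```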
 
t_for_each_arm = None¶
- Keep in memory not only the global time step \(t\), but also per-arm time steps, allowing GLR_UCB to use a different time step \(t_k\) for each arm in the exploration function \(f(t_k) = \log(t_k) + 3 \log(\log(t_k))\).
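For intuition, here is a hedged sketch (not the GLR_UCB source) of how a restart wrapper could drive these per-arm clocks; the function names are hypothetical:

```python
import numpy as np

nbArms = 3
t_for_each_arm = np.ones(nbArms, dtype=int)  # hypothetical per-arm clocks t_k

def on_new_round():
    t_for_each_arm[:] += 1   # every arm's clock ticks each round

def on_change_detected(arm):
    t_for_each_arm[arm] = 1  # reset arm k's clock after a detected change,
                             # so f(t_k) restarts exploration for that arm
```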
computeIndex(arm)[source]¶
- Compute the current index, at time t and after \(N_k(t)\) pulls of arm k:

\[\begin{split}\hat{\mu}_k(t) &= \frac{X_k(t)}{N_k(t)}, \\
U_k(t) &= \sup\limits_{q \in [a, b]} \left\{ q : \mathrm{kl}(\hat{\mu}_k(t), q) \leq \frac{\log(t_k) + c \log(\log(t_k))}{N_k(t)} \right\}, \\
I_k(t) &= U_k(t).\end{split}\]

- Rewards are assumed to be in \([a, b]\) (default to \([0, 1]\)), \(\mathrm{kl}(x, y)\) is the Kullback-Leibler divergence between two distributions of means x and y (see Arms.kullback), and c is the exploration parameter (default to 3 for this class).
- Warning: the only difference with klUCB is that a custom \(t_k\) is used for each arm k, instead of a common \(t\). This policy is designed to be used with GLR_UCB.
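A minimal numerical sketch of this computation for Bernoulli arms; the library itself uses the optimized klucbBern solver from Arms.kullback, so this bisection is only an illustrative stand-in:

```python
import math

def klBern(x, y, eps=1e-15):
    """Bernoulli KL divergence kl(x, y), with clamping away from {0, 1}."""
    x = min(max(x, eps), 1 - eps)
    y = min(max(y, eps), 1 - eps)
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

def klucb_index(mean, pulls, t_k, c=3, tolerance=1e-4):
    """Largest q in [mean, 1] with kl(mean, q) <= (log(t_k) + c*log(log(t_k)))/pulls."""
    if pulls == 0:
        return float('+inf')   # unexplored arms get maximal index
    # log(t_k) is clamped at 1 so the double logarithm stays well defined
    level = (math.log(t_k) + c * math.log(max(math.log(t_k), 1.0))) / pulls
    lo, hi = mean, 1.0
    while hi - lo > tolerance:   # bisection on the increasing map q -> kl(mean, q)
        mid = (lo + hi) / 2.0
        if klBern(mean, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo

# E.g., an arm with empirical mean 0.5, 20 pulls, and per-arm time t_k = 100:
print(klucb_index(0.5, 20, 100))
```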
__module__ = 'Policies.klUCB_forGLR'¶