configuration_sparse module

Configuration for the simulations, for single-player sparse bandit.

configuration_sparse.HORIZON = 10000

HORIZON : number of time steps of the experiments. Warning: should be >= 10000 to be interesting “asymptotically”.

configuration_sparse.REPETITIONS = 100

REPETITIONS : number of repetitions of the experiments. Warning: Should be >= 10 to be statistically trustworthy.

configuration_sparse.DO_PARALLEL = True

To profile the code, turn off parallel computing (set DO_PARALLEL = False).

configuration_sparse.N_JOBS = -1

Number of jobs to use for the parallel computations. -1 means all the CPU cores, 1 means no parallelization.
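For instance, a minimal sketch (not the library's actual evaluator code, and assuming joblib is installed) of how DO_PARALLEL and N_JOBS would typically be passed to joblib:

    from joblib import Parallel, delayed

    DO_PARALLEL = True
    N_JOBS = -1 if DO_PARALLEL else 1   # -1 = all CPU cores, 1 = sequential

    def one_repetition(repetition_id):
        # placeholder for one full simulation run
        return repetition_id ** 2

    results = Parallel(n_jobs=N_JOBS, verbose=6)(
        delayed(one_repetition)(r) for r in range(100)
    )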

configuration_sparse.RANDOM_SHUFFLE = False

Whether to shuffle the arms (shuffle(arms)).

configuration_sparse.RANDOM_INVERT = False

Whether to invert the order of the arms (arms = arms[::-1]).
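A minimal sketch of what these two flags would do to the list of arms, assuming arms is a plain Python list (illustrative only, not the library's exact code):

    from random import shuffle

    RANDOM_SHUFFLE, RANDOM_INVERT = False, False
    arms = [0.1, 0.2, 0.4, 0.8]

    if RANDOM_SHUFFLE:
        shuffle(arms)        # in-place random permutation
    elif RANDOM_INVERT:
        arms = arms[::-1]    # reverse the order of the arms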

configuration_sparse.NB_RANDOM_EVENTS = 5

Number of random events. They are uniformly spaced in time steps.
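For example, one way to place NB_RANDOM_EVENTS events uniformly over the HORIZON time steps (the exact spacing used by the library may differ slightly):

    HORIZON, NB_RANDOM_EVENTS = 10000, 5
    event_times = [(i + 1) * HORIZON // (NB_RANDOM_EVENTS + 1)
                   for i in range(NB_RANDOM_EVENTS)]
    # -> [1666, 3333, 5000, 6666, 8333]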

configuration_sparse.UPDATE_ALL_CHILDREN = False

Should the Aggregator policy update the trusts in each child, or only in the one trusted for the last decision?

configuration_sparse.LEARNING_RATE = 0.01

Learning rate for my aggregated bandit (it can be autotuned)

configuration_sparse.UNBIASED = False

Should the Aggregator policy use the biased estimator of the rewards (i.e. just r_t), or the unbiased estimator (r_t / p_t)?
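A short sketch of the two estimators, assuming r_t is the observed reward and p_t the probability with which the chosen child was trusted (hypothetical helper, for illustration only):

    UNBIASED = False

    def reward_estimate(r_t, p_t):
        # importance-weighted (unbiased) estimate vs. plain (biased) reward
        return r_t / p_t if UNBIASED else r_t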

configuration_sparse.UPDATE_LIKE_EXP4 = False

Should we update the trust probabilities like in Exp4, or like in my initial Aggregator proposal?
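A rough sketch of an Exp4-style exponential-weight update of the trust probabilities; the names and the exact normalization are assumptions, not the Aggregator's actual code:

    import numpy as np

    LEARNING_RATE = 0.01

    def update_trusts(trusts, reward_estimates):
        # multiplicative update, then renormalize to a probability vector
        trusts = np.asarray(trusts) * np.exp(LEARNING_RATE * np.asarray(reward_estimates))
        return trusts / np.sum(trusts)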

configuration_sparse.TEST_Aggregator = False

Whether my Aggregator policy is also tried.

configuration_sparse.CACHE_REWARDS = False

Should we cache the rewards? If True, the random rewards are drawn once and are the same for all the REPETITIONS simulations and for every algorithm.
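A sketch of what caching means here: draw the random rewards once and replay the same table for every algorithm and every repetition (illustrative, assuming Bernoulli-like arms with the given means):

    import numpy as np

    HORIZON, REPETITIONS = 10000, 100
    means = [0.1, 0.5, 0.9]
    rng = np.random.default_rng(0)

    # shape (REPETITIONS, nb_arms, HORIZON): the same draws are reused by all policies
    cached_rewards = rng.binomial(1, np.array(means)[None, :, None],
                                  size=(REPETITIONS, len(means), HORIZON))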

configuration_sparse.TRUNC = 1

Trunc parameter, i.e. amplitude, for Exponential arms.

configuration_sparse.MINI = 0

Lower bound on rewards from Gaussian arms.

configuration_sparse.MAXI = 1

Upper bound on rewards from Gaussian arms, i.e. amplitude = MAXI - MINI = 1.
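A sketch of the role of MINI and MAXI: Gaussian rewards are typically clipped to [MINI, MAXI], so the amplitude is MAXI - MINI = 1 (hypothetical helper, not the actual arm implementation):

    import numpy as np

    MINI, MAXI = 0, 1
    rng = np.random.default_rng()

    def draw_truncated_gaussian(mean, sigma=0.05):
        # Gaussian draw clipped to the interval [MINI, MAXI]
        return float(np.clip(rng.normal(mean, sigma), MINI, MAXI))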

configuration_sparse.SCALE = 1

Scale of Gamma arms

configuration_sparse.NB_ARMS = 15

Number of arms for non-hard-coded problems (Bayesian problems)

configuration_sparse.SPARSITY = 7

Sparsity for non-hard-coded problems (Bayesian problems)

configuration_sparse.LOWERNONZERO = 0.25

Default value for the lowest non-zero mean.
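A hedged sketch of how a sparse vector of means could be built from NB_ARMS, SPARSITY and LOWERNONZERO: SPARSITY arms get means spread between LOWERNONZERO and 1, the others stay at zero. The library's actual generator may differ; this only illustrates the role of the three parameters.

    import numpy as np

    NB_ARMS, SPARSITY, LOWERNONZERO = 15, 7, 0.25
    means = np.zeros(NB_ARMS)
    means[-SPARSITY:] = np.linspace(LOWERNONZERO, 1.0, SPARSITY)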

configuration_sparse.VARIANCE = 0.05

Variance of Gaussian arms

configuration_sparse.ARM_TYPE

alias of Arms.Gaussian.Gaussian
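Each 4-tuple in the environment 'params' list below appears to describe one Gaussian arm as (mean, variance, MINI, MAXI); the exact constructor signature of Arms.Gaussian.Gaussian is an assumption here, so this is only an illustrative sketch:

    from Arms import Gaussian   # import path assumed

    params = [(0.05, 0.05, 0.0, 1.0), (0.2, 0.05, 0.0, 1.0), (0.85, 0.05, 0.0, 1.0)]
    arms = [Gaussian(*p) for p in params]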

configuration_sparse.ENVIRONMENT_BAYESIAN = False

True to use a Bayesian problem.

configuration_sparse.MEANS = [0.00125, 0.03660714285714286, 0.07196428571428572, 0.10732142857142857, 0.14267857142857143, 0.1780357142857143, 0.21339285714285713, 0.24875, 0.25375, 0.3775, 0.50125, 0.625, 0.74875, 0.8725, 0.99625]

Means of arms for non-hard-coded problems (non-Bayesian).

configuration_sparse.ISSORTED = True

Whether to sort the means of the problems or not.

configuration_sparse.configuration = {
    'environment': [{
        'arm_type': <class 'Arms.Gaussian.Gaussian'>,
        'params': [(0.05, 0.05, 0.0, 1.0), (0.07142857142857144, 0.05, 0.0, 1.0), (0.09285714285714286, 0.05, 0.0, 1.0), (0.1142857142857143, 0.05, 0.0, 1.0), (0.13571428571428573, 0.05, 0.0, 1.0), (0.15714285714285717, 0.05, 0.0, 1.0), (0.1785714285714286, 0.05, 0.0, 1.0), (0.2, 0.05, 0.0, 1.0), (0.4, 0.05, 0.0, 1.0), (0.47500000000000003, 0.05, 0.0, 1.0), (0.55, 0.05, 0.0, 1.0), (0.625, 0.05, 0.0, 1.0), (0.7000000000000001, 0.05, 0.0, 1.0), (0.7750000000000001, 0.05, 0.0, 1.0), (0.8500000000000001, 0.05, 0.0, 1.0)],
        'sparsity': 7}],
    'horizon': 10000,
    'n_jobs': -1,
    'nb_random_events': 5,
    'policies': [
        {'archtype': <class 'Policies.EmpiricalMeans.EmpiricalMeans'>, 'params': {'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.UCBalpha.UCBalpha'>, 'params': {'alpha': 1, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.SparseUCB.SparseUCB'>, 'params': {'alpha': 1, 'sparsity': 7, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.klUCB.klUCB'>, 'params': {'klucb': CPUDispatcher(<function klucbBern>), 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.SparseklUCB.SparseklUCB'>, 'params': {'sparsity': 7, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.Thompson.Thompson'>, 'params': {'posterior': <class 'Policies.Posterior.Beta.Beta'>, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.SparseWrapper.SparseWrapper'>, 'params': {'sparsity': 7, 'policy': <class 'Policies.Thompson.Thompson'>, 'posterior': <class 'Policies.Posterior.Beta.Beta'>, 'use_ucb_for_set_J': True, 'use_ucb_for_set_K': True, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.Thompson.Thompson'>, 'params': {'posterior': <class 'Policies.Posterior.Gauss.Gauss'>, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.SparseWrapper.SparseWrapper'>, 'params': {'sparsity': 7, 'policy': <class 'Policies.Thompson.Thompson'>, 'posterior': <class 'Policies.Posterior.Gauss.Gauss'>, 'use_ucb_for_set_J': True, 'use_ucb_for_set_K': True, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.BayesUCB.BayesUCB'>, 'params': {'posterior': <class 'Policies.Posterior.Beta.Beta'>, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.SparseWrapper.SparseWrapper'>, 'params': {'sparsity': 7, 'policy': <class 'Policies.BayesUCB.BayesUCB'>, 'posterior': <class 'Policies.Posterior.Beta.Beta'>, 'use_ucb_for_set_J': True, 'use_ucb_for_set_K': True, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.BayesUCB.BayesUCB'>, 'params': {'posterior': <class 'Policies.Posterior.Gauss.Gauss'>, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.SparseWrapper.SparseWrapper'>, 'params': {'sparsity': 7, 'posterior': <class 'Policies.Posterior.Gauss.Gauss'>, 'policy': <class 'Policies.BayesUCB.BayesUCB'>, 'use_ucb_for_set_J': True, 'use_ucb_for_set_K': True, 'lower': 0, 'amplitude': 1}},
        {'archtype': <class 'Policies.OSSB.OSSB'>, 'params': {'epsilon': 0.0, 'gamma': 0.0}},
        {'archtype': <class 'Policies.OSSB.GaussianOSSB'>, 'params': {'epsilon': 0.0, 'gamma': 0.0, 'variance': 0.05}},
        {'archtype': <class 'Policies.OSSB.SparseOSSB'>, 'params': {'epsilon': 0.0, 'gamma': 0.0, 'sparsity': 7}},
        {'archtype': <class 'Policies.OSSB.SparseOSSB'>, 'params': {'epsilon': 0.001, 'gamma': 0.0, 'sparsity': 7}},
        {'archtype': <class 'Policies.OSSB.SparseOSSB'>, 'params': {'epsilon': 0.0, 'gamma': 0.01, 'sparsity': 7}},
        {'archtype': <class 'Policies.OSSB.SparseOSSB'>, 'params': {'epsilon': 0.001, 'gamma': 0.01, 'sparsity': 7}}],
    'random_invert': False,
    'random_shuffle': False,
    'repetitions': 100,
    'verbosity': 6}

This dictionary configures the experiments
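For instance, a hedged sketch of how one more policy could be appended to this dictionary before running the experiment (the Policies import path is assumed; the chosen parameters are arbitrary):

    from Policies import UCBalpha   # import path assumed

    configuration['policies'].append({
        'archtype': UCBalpha,
        'params': {'alpha': 0.5, 'lower': 0, 'amplitude': 1},
    })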

configuration_sparse.LOWER = 0

LOWER: default lower bound on the rewards.

configuration_sparse.AMPLITUDE = 1

AMPLITUDE: default amplitude of the rewards (so rewards lie in [LOWER, LOWER + AMPLITUDE]).

configuration_sparse.klucbGauss(x, d, precision=0.0)

klucbGauss(x, d, sig2x) with the correct variance (= 0.25).

configuration_sparse.klucbGamma(x, d, precision=0.0)

klucbGamma(x, d, scale) with the correct scale (= 1).
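A hedged sketch of what these two wrappers plausibly look like: thin wrappers around the generic KL-UCB helpers with the variance and scale fixed (the kullback import path and the helper signatures are assumptions):

    from Policies import kullback as klucb   # import path assumed

    SCALE = 1

    def klucbGauss(x, d, precision=0.0):
        # klucbGauss(x, d, sig2x) with the variance fixed to 0.25
        return klucb.klucbGauss(x, d, 0.25)

    def klucbGamma(x, d, precision=0.0):
        # klucbGamma(x, d, scale) with the scale fixed to SCALE = 1
        return klucb.klucbGamma(x, d, SCALE)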