Environment.Result module

Result.Result class to wrap the simulation results.

class Environment.Result.Result(nbArms, horizon, indexes_bestarm=-1, means=None)[source]

Bases: object

Result accumulators.

__init__(nbArms, horizon, indexes_bestarm=-1, means=None)[source]

Create Result.

choices = None

Store all the choices.

rewards = None

Store all the rewards, to compute the mean.

pulls = None

Store the pulls.

indexes_bestarm = None

Also store the position of the best arm, in case of a dynamically switching environment.

running_time = None

Store the running time of the experiment.

memory_consumption = None

Store the memory consumption of the experiment.

number_of_cp_detections = None

Store the number of change points detected during the experiment.

store(time, choice, reward)[source]

Store results.
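As an illustration of the documented attributes and the store(time, choice, reward) signature, here is a hypothetical minimal sketch of such an accumulator (a reimplementation for clarity, not the library's actual code; the class name SketchResult is invented):

```python
class SketchResult:
    """Hypothetical sketch of a Result-like accumulator for one simulation."""

    def __init__(self, nbArms, horizon, indexes_bestarm=-1):
        self.choices = [0] * horizon     # choice made at each time step
        self.rewards = [0.0] * horizon   # reward received at each time step
        self.pulls = [0] * nbArms        # number of pulls of each arm

    def store(self, time, choice, reward):
        """Store the choice and reward observed at this time step."""
        self.choices[time] = choice
        self.rewards[time] = reward
        self.pulls[choice] += 1

# Example: 5 rounds on 3 arms, then compute the mean reward from the stored data.
res = SketchResult(nbArms=3, horizon=5)
for t, (arm, r) in enumerate([(0, 1.0), (2, 0.0), (0, 1.0), (1, 0.5), (0, 1.0)]):
    res.store(t, arm, r)
print(res.pulls)                               # → [3, 1, 1]
print(sum(res.rewards) / len(res.rewards))     # → 0.7
```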

change_in_arms(time, indexes_bestarm)[source]

Store the position of the best arm from this list of arms.

  • From that time t and after, the index of the best arm is stored as indexes_bestarm.


Warning: this is still experimental!
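The semantics described above ("from that time t and after, the index of the best arm is stored") could be sketched as follows, assuming a per-time-step record of the best arm's index (a hypothetical illustration, not the library's actual implementation):

```python
class BestArmRecord:
    """Hypothetical sketch: track the best arm's index over time."""

    def __init__(self, horizon, indexes_bestarm=-1):
        # per-time-step record of the index of the best arm
        self.indexes_bestarm = [indexes_bestarm] * horizon

    def change_in_arms(self, time, indexes_bestarm):
        # from time t and after, the best arm is indexes_bestarm
        for t in range(time, len(self.indexes_bestarm)):
            self.indexes_bestarm[t] = indexes_bestarm

# Example: the best arm switches from 0 to 2 at time step 4.
rec = BestArmRecord(horizon=10, indexes_bestarm=0)
rec.change_in_arms(time=4, indexes_bestarm=2)
print(rec.indexes_bestarm)  # → [0, 0, 0, 0, 2, 2, 2, 2, 2, 2]
```

Keeping the full per-time-step record makes it straightforward to compute regret against the (possibly changing) best arm after the experiment.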

__module__ = 'Environment.Result'

__weakref__

list of weak references to the object (if defined)