Experiments#

A key part of any computer simulation study is experimentation. Here a set of experiments will be conducted in an attempt to understand the system under study and find improvements to it. Experiments essentially vary inputs and process logic.

We can do this manually, but as we develop a model the number of input parameters will increase.

There are several ways you might handle lots of parameters.

  • a python dictionary

  • a custom parameter class

  • a dataclass

All of these approaches work well and it is largely a matter of judgement as to which you prefer. One downside of a python dict and a custom class is that they are both mutable (although a custom class can expose read-only properties so that users can only view attributes). A dataclass can easily be made immutable and requires less code than a custom class, but has the downside that its syntax is a little less pythonic. Here we will build a parameter class called Experiment.
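For illustration, a minimal sketch of the frozen (immutable) dataclass option is shown below. The field names mirror the parameters used later in this notebook, but the class itself is illustrative only and is not used again.

from dataclasses import dataclass

@dataclass(frozen=True)
class ExperimentParams:
    '''Illustrative immutable parameter container (not used later).'''
    n_operators: int = 13
    mean_iat: float = 0.6
    call_low: float = 5.0
    call_mode: float = 7.0
    call_high: float = 10.0

params = ExperimentParams(n_operators=14)
# params.n_operators = 15  # would raise dataclasses.FrozenInstanceError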

1. Imports#

import numpy as np
import pandas as pd
import simpy
import itertools

2. Notebook level variables, constants, and default values#

A useful first step when setting up a simulation model is to define the base case or as-is parameters. Here we will create a set of constant/default values for our Experiment class, but you could also consider reading these in from a file.

# default resources
N_OPERATORS = 13

# default mean inter-arrival time (exp)
MEAN_IAT = 60 / 100

# default service time parameters (triangular)
CALL_LOW = 5.0
CALL_MODE = 7.0
CALL_HIGH = 10.0

# Seeds for arrival and service time distributions (for repeatable single run)
ARRIVAL_SEED = 42
CALL_SEED = 101

# Boolean switch to display simulation results as the model runs
TRACE = False

# run variables
RESULTS_COLLECTION_PERIOD = 1000
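If you would rather read these defaults in from a file, a minimal sketch using a JSON configuration is shown below. The file name and keys are assumptions for illustration only and the file is not used elsewhere in this notebook.

import json

# illustrative only: assumes a file 'defaults.json' exists with keys
# such as {"n_operators": 13, "mean_iat": 0.6, ...}
with open('defaults.json') as config_file:
    defaults = json.load(config_file)

N_OPERATORS = defaults['n_operators']
MEAN_IAT = defaults['mean_iat']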

3. Distribution classes#

We will also define two distribution classes (Triangular and Exponential) to encapsulate the random number generation, parameters, and random seeds used in sampling. This simplifies what we need to include in the Experiment class and, as we will see later, makes it easier to vary distributions as well as parameters.

class Triangular():
    '''
    Convenience class for the triangular distribution.
    packages up distribution parameters, seed and random generator.
    '''
    def __init__(self, low, mode, high, random_seed=None):
        '''
        Constructor. Accepts and stores parameters of the triangular dist
        and a random seed.
        
        Params:
        ------
        low: float
            The smallest value that can be sampled
            
        mode: float
            The most frequently sampled value
            
        high: float
            The highest value that can be sampled
        
        random_seed: int, optional (default=None)
            Used with params to create a series of repeatable samples.
        '''
        self.rand = np.random.default_rng(seed=random_seed)
        self.low = low
        self.high = high
        self.mode = mode
        
    def sample(self, size=None):
        '''
        Generate one or more samples from the triangular distribution
        
        Params:
        --------
        size: int
            the number of samples to return.  If size=None then a single
            sample is returned.
            
        Returns:
        -------
        float or np.ndarray (if size >=1)
        '''
        return self.rand.triangular(self.low, self.mode, self.high, size=size)
class Exponential():
    '''
    Convenience class for the exponential distribution.
    packages up distribution parameters, seed and random generator.
    '''
    def __init__(self, mean, random_seed=None):
        '''
        Constructor
        
        Params:
        ------
        mean: float
            The mean of the exponential distribution
        
        random_seed: int, optional (default=None)
            A random seed to reproduce samples.  If set to None then a unique
            sample is created.
        '''
        self.rand = np.random.default_rng(seed=random_seed)
        self.mean = mean
        
    def sample(self, size=None):
        '''
        Generate a sample from the exponential distribution
        
        Params:
        -------
        size: int, optional (default=None)
            the number of samples to return.  If size=None then a single
            sample is returned.
            
        Returns:
        -------
        float or np.ndarray (if size >=1)
        '''
        return self.rand.exponential(self.mean, size=size)
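For example, two instances created with the same seed return an identical, repeatable stream of samples, while omitting the seed gives a different stream each time the notebook is run:

# same seed -> identical samples from each instance
d1 = Exponential(MEAN_IAT, random_seed=42)
d2 = Exponential(MEAN_IAT, random_seed=42)
print(d1.sample(size=3))
print(d2.sample(size=3))

# no seed -> different samples each run
d3 = Exponential(MEAN_IAT)
print(d3.sample(size=3))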

4. Experiment class#

An experiment class is useful because it allows us to easily configure and schedule a large number of experiments to run in a loop. We set the class up so that it uses the default variables defined above, i.e. by default the model reflects the as-is process. To run a new experiment we simply override the default values.

One caveat that I recommend is to default any seeds you use to None. This means that, by default, the experiment will produce different results each time it is run.

class Experiment:
    '''
    Parameter class for 111 simulation model
    '''
    def __init__(self, n_operators=N_OPERATORS, mean_iat=MEAN_IAT, 
                 call_low=CALL_LOW, call_mode=CALL_MODE, call_high=CALL_HIGH,
                 arrival_seed=None, call_seed=None):
        '''
        The init method sets up our defaults. 
        '''
        self.n_operators = n_operators
        self.arrival_dist = Exponential(mean_iat, random_seed=arrival_seed)
        self.call_dist = Triangular(call_low, call_mode, call_high, 
                                    random_seed=call_seed)
        
        # variable used to store results of experiment
        self.results = {}
        self.results['waiting_times'] = []
        
        # total operator usage time for utilisation calculation.
        self.results['total_call_duration'] = 0.0
        
        # resources: these cannot be created until the simpy environment
        # exists (see single_run), so we store a placeholder for transparency
        self.operators = None

4.1. Creating a default experiment#

Using Experiment is very simple. For example, to create a default scenario we would use the following code:

env = simpy.Environment()
default_scenario = Experiment()

Due to the way python works, we can access all of the experiment variables from the default_scenario object. For example, the following code will generate an inter-arrival time:

default_scenario.arrival_dist.sample()
0.2664270263455266
default_scenario.n_operators
13

4.2. Creating an experiment with more call operators#

To change parameters in an experiment we just need to include a new value when we create the Experiment object. For example, if we wanted to increase the number of operators to 14, we would use the following code:

env = simpy.Environment()
extra_server = Experiment(n_operators=14)
extra_server.n_operators
14

5. Modified model code#

We will modify the model code and logic that we have already developed. The functions for service and arrivals will now accept an Experiment argument.

Note that at this point you could put all of the code into a python module and import the functions and classes you need into an experiment workbook.
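For example, if the classes and functions were saved in a module (the module name below is hypothetical), the experiment workbook would only need an import such as:

# hypothetical module name - use whatever you call the file
from call_centre_model import (Experiment, Exponential, Triangular,
                               single_run, get_experiments,
                               run_all_experiments)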

def trace(msg):
    '''
    Turns printing of simulation events on and off.
    
    Params:
    -------
    msg: str
        string to print to screen.
    '''
    if TRACE:
        print(msg)
def service(identifier, args, env):
    '''
    simulates the service process for a call operator

    1. request and wait for a call operator
    2. phone triage (triangular)
    3. exit system
    
    Params:
    ------
    
    identifier: int
        A unique identifier for this caller
        
    args: Experiment
        The settings and input parameters for the current experiment
        
    env: simpy.Environment
        The current environment the simulation is running in
        We use this to pause and restart the process after a delay.
    
    '''
    
    # record the time that call entered the queue
    start_wait = env.now
    
    # request an operator
    with args.operators.request() as req:
        yield req
        
        # record the waiting time for call to be answered
        waiting_time = env.now - start_wait
        
        # ######################################################################
        # MODIFICATION: store the results for an experiment 
        args.results['waiting_times'].append(waiting_time)
        # ######################################################################

        trace(f'operator answered call {identifier} at ' \
              + f'{env.now:.3f}')

        # ######################################################################
        # MODIFICATION: the sample distribution is defined by the experiment.
        call_duration = args.call_dist.sample()       
        # ######################################################################
        
        # schedule process to begin again after call_duration
        yield env.timeout(call_duration)
        
        # update the total call_duration 
        args.results['total_call_duration'] += call_duration

        # print out information for patient.
        trace(f'call {identifier} ended {env.now:.3f}; ' \
              + f'waiting time was {waiting_time:.3f}')
def arrivals_generator(env, args):
    '''
    Generates new callers; the inter-arrival time (IAT) is exponentially
    distributed.

    Parameters:
    ------
    env: simpy.Environment
        The simpy environment for the simulation

    args: Experiment
        The settings and input parameters for the simulation.
    '''    
    # use itertools as it provides an infinite loop 
    # with a counter variable that we can use for unique Ids
    for caller_count in itertools.count(start=1):

        # ######################################################################
        # MODIFICATION:the sample distribution is defined by the experiment.
        inter_arrival_time = args.arrival_dist.sample()
        # ######################################################################
        
        yield env.timeout(inter_arrival_time)

        trace(f'call arrives at: {env.now:.3f}')

        # ######################################################################
        # MODIFICATION: we pass the experiment to the service function
        env.process(service(caller_count, args, env))
        # ######################################################################

6. A single run wrapper function#

def single_run(experiment, rc_period=RESULTS_COLLECTION_PERIOD):
    '''
    Perform a single run of the model and return the results
    
    Parameters:
    -----------
    
    experiment: Experiment
        The experiment/parameters to use with the model

    rc_period: float, optional (default=RESULTS_COLLECTION_PERIOD)
        The results collection period, i.e. the length of the model run.
    '''

    # results dictionary.  Each KPI is a new entry.
    results = {}
    
    # environment is (re)created inside single run
    env = simpy.Environment()

    # we create simpy resource here - this has to be after we
    # create the environment object.
    experiment.operators = simpy.Resource(env, capacity=experiment.n_operators)
    
    # we pass the experiment to the arrivals generator
    env.process(arrivals_generator(env, experiment))
    env.run(until=rc_period)

    # end of run results: calculate mean waiting time
    results['01_mean_waiting_time'] = \
        np.mean(experiment.results['waiting_times'])
    
    # end of run results: calculate mean operator utilisation
    results['02_operator_util'] = (experiment.results['total_call_duration'] \
                           / (rc_period * experiment.n_operators)) * 100.0
    
    print('Experiment complete')

    # return the results from the run of the model
    return results
TRACE = False
default_scenario = Experiment()
results = single_run(default_scenario)
print(f"Mean waiting time: {results['01_mean_waiting_time']:.2f} hrs \n"
      + f"Operator Utilisation {results['02_operator_util']:.2f}%")
Experiment complete
Mean waiting time: 6.11 hrs 
Operator Utilisation 96.42%

7. Multiple experiments#

The single_run wrapper function for the model and the Experiment class mean that it is very simple to run multiple experiments. We will define two new functions for running multiple experiments:

  • get_experiments() - this will return a python dictionary containing a unique name for an experiment paired with an Experiment object

  • run_all_experiments() - this will loop through the dictionary, run all experiments and return combined results.

def get_experiments():
    '''
    Creates a dictionary object containing
    objects of type `Experiment` to run.
    
    Returns:
    --------
    dict
        Contains the experiments for the model
    '''
    experiments = {}
    
    # base case
    # we will sync scenarios by using seeds
    experiments['base'] = Experiment(arrival_seed=ARRIVAL_SEED, 
                                     call_seed=CALL_SEED)
    
    # +1 extra capacity
    experiments['operators+1'] = Experiment(arrival_seed=ARRIVAL_SEED, 
                                            call_seed=CALL_SEED,
                                            n_operators=N_OPERATORS+1)
    
    return experiments
def run_all_experiments(experiments, rc_period=RESULTS_COLLECTION_PERIOD):
    '''
    Run each of the experiments for a specified results
    collection period.
    
    Params:
    ------
    experiments: dict
        dictionary of Experiment objects
        
    rc_period: float
        model run length
    
    '''
    print('Model experiments:')
    print(f'No. experiments to execute = {len(experiments)}\n')

    experiment_results = {}
    for exp_name, experiment in experiments.items():
        
        print(f'Running {exp_name}', end=' => ')
        results = single_run(experiment, rc_period)
        print('done.\n')
        
        # save the results
        experiment_results[exp_name] = results
    
    print('All experiments are complete.')
    
    # format the results
    return pd.DataFrame(experiment_results)
# get the experiments
experiments = get_experiments()

# run the scenario analysis
experiment_results = run_all_experiments(experiments)
Model experiments:
No. experiments to execute = 2

Running base => Experiment complete
done.

Running operators+1 => Experiment complete
done.

All experiments are complete.
experiment_results.round(2)
                       base  operators+1
01_mean_waiting_time   1.67         0.72
02_operator_util      91.99        85.52
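Because experiments are simply entries in a dictionary, a larger batch can also be generated programmatically. The sketch below is illustrative only: it creates one experiment per staffing level from 13 to 16 operators and runs them all with the existing run_all_experiments function.

# illustrative sketch: build a batch of staffing experiments in a loop
staffing_experiments = {}
for n in range(13, 17):
    staffing_experiments[f'operators={n}'] = Experiment(arrival_seed=ARRIVAL_SEED,
                                                        call_seed=CALL_SEED,
                                                        n_operators=n)

staffing_results = run_all_experiments(staffing_experiments)
staffing_results.round(2)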