Steps in the Development of Simulation Models
A simulation of a system is the operation of a model of that system, called the simulation model. The steps involved in developing a simulation model, designing a simulation experiment, and performing simulation analysis are:
Step 1. Identify the Problem: Enumerate problems with an existing system. Produce requirements for a proposed system.
Step 2. Formulate the Problem: Select the bounds of the system, the problem, or a part thereof, to be studied. Define the overall objective of the study and a few specific issues to be addressed. Define performance measures: the quantitative criteria on the basis of which different system configurations will be compared and ranked. Identify, briefly at this stage, the configurations of interest and formulate hypotheses about system performance. Decide the time frame of the study. Identify the end-user of the simulation model.
Step 3. Collect and Process Real System Data: Collect data on system specifications, input variables, as well as the performance of the existing system.
Step 4. Formulate and Develop a Model: Develop schematics and network diagrams of the system. Translate these conceptual models into a form acceptable to the simulation software. Verify that the simulation model executes as intended. Verification techniques include traces, varying input parameters over their acceptable range and checking the output, substituting constants for random variables and manually checking results, and animation.
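As a sketch of Step 4, the fragment below translates a conceptual model of a hypothetical single-server FIFO queue into code via the Lindley recursion, and then verifies it with the technique named above: substituting constants for the random variables so the results can be checked by hand. The arrival and service rates are illustrative assumptions, not values from this text.

```python
import random

def simulate_queue(n_customers, interarrival, service):
    """Waiting times in a single-server FIFO queue via the Lindley
    recursion W[n] = max(0, W[n-1] + S[n-1] - A[n]).
    `interarrival` and `service` are zero-argument callables, so random
    variables can be replaced with constants during verification."""
    waits = [0.0]
    for _ in range(n_customers - 1):
        waits.append(max(0.0, waits[-1] + service() - interarrival()))
    return waits

# Verification: substitute constants for the random variables. With a
# fixed service time of 2 and interarrival time of 3, no customer ever
# waits, so every waiting time must be exactly zero -- checkable by hand.
assert all(w == 0.0 for w in simulate_queue(100, lambda: 3.0, lambda: 2.0))

# Stochastic run (hypothetical Poisson arrivals, exponential service).
rng = random.Random(42)
waits = simulate_queue(10_000, lambda: rng.expovariate(1.0),
                       lambda: rng.expovariate(1.25))
```

Passing the sampling functions in as parameters is what makes this verification step cheap: the same model code runs unchanged under deterministic and stochastic inputs.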
Step 5. Validate the Model: Compare the model's performance under known conditions with the performance of the real system. Perform statistical inference tests and have the model examined by system experts. Assess the confidence that the end-user places in the model and address any problems found.
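One simple statistical inference test for Step 5 is to check whether a confidence interval built from independent model replications covers the performance measure observed on the real system. The sketch below uses hypothetical waiting-time data and a caller-supplied Student-t critical value; none of the numbers come from this text.

```python
import statistics

def ci_contains(model_reps, real_value, t_crit):
    """Validation check (a sketch): does a confidence interval built from
    independent model replications cover the real system's observed
    performance? `t_crit` is the Student-t critical value for
    len(model_reps) - 1 degrees of freedom, supplied by the caller."""
    n = len(model_reps)
    m = statistics.mean(model_reps)
    half = t_crit * statistics.stdev(model_reps) / n ** 0.5
    return m - half <= real_value <= m + half

# Hypothetical data: mean waiting time (minutes) from 5 model
# replications vs. the value measured on the real system.
model = [4.2, 3.9, 4.5, 4.1, 4.3]
valid = ci_contains(model, real_value=4.0, t_crit=2.776)  # t for df = 4
```

If the real value falls outside the interval, the discrepancy is investigated with the system experts before the model is used for experimentation.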
Step 6. Document Model for Future Use: Document objectives, assumptions, and input variables in detail. Document the experimental design.
Step 7. Select Appropriate Experimental Design: Select a performance measure, a few input variables that are likely to influence it, and the levels of each input variable. Generally, in stationary systems, the steady-state behaviour of the response variable is of interest. Ascertain whether a terminating or a non-terminating simulation run is appropriate. Select the run length. Select appropriate starting conditions. Select the length of the warm-up period, if required.
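The warm-up period mentioned above can be handled by deleting the initial, initialization-biased observations before estimating the steady-state mean. The sketch below uses a synthetic output series that drifts up from an empty-and-idle start; the warm-up length of 100 is an illustrative assumption (in practice it would be chosen from a graphical procedure such as Welch's method).

```python
import random
import statistics

def truncated_mean(series, warmup):
    """Steady-state estimate: delete the first `warmup` observations,
    which are biased by the starting conditions, and average the rest."""
    return statistics.mean(series[warmup:])

# Hypothetical output series: a measure that starts at 0 (empty system)
# and climbs toward its steady-state level of about 10.5, so the early
# observations drag the overall mean down.
rng = random.Random(1)
series = [min(i / 50, 1.0) * 10 + rng.random() for i in range(1000)]
steady = truncated_mean(series, warmup=100)
```

Here `steady` exceeds the untruncated mean of the full series, which is exactly the initialization bias the warm-up deletion removes.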
Decide the number of independent runs – each run uses a different random number stream and the same starting conditions – by considering the sample size of the output data. The sample size must be large enough (at least 3-5 runs for each configuration) to provide the required confidence in the performance measure estimates. Alternatively, use common random numbers to compare alternative configurations: dedicate a separate random number stream to each sampling process and reuse the same streams across configurations. Identify output data most likely to be correlated.
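The common-random-numbers idea can be sketched as follows: two hypothetical queue configurations are compared over paired replications that reuse the same seeds, so the randomness in arrivals and services is shared and the variance of the paired differences is typically much lower than with independent streams. The service rates and run lengths are illustrative assumptions.

```python
import random
import statistics

def run_config(service_rate, seed, n=2000):
    """One replication of a hypothetical single-server queue, returning
    the mean waiting time; the seed fixes the random number stream."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        w = max(0.0, w + rng.expovariate(service_rate) - rng.expovariate(1.0))
        total += w
    return total / n

# Compare two configurations (service rates 1.25 vs 1.5) over 10 paired
# runs. Common random numbers: both configurations reuse the same seeds,
# so the paired differences are typically far less variable than when
# each configuration uses its own independent stream.
seeds = range(10)
diff_crn = [run_config(1.25, s) - run_config(1.5, s) for s in seeds]
diff_ind = [run_config(1.25, s) - run_config(1.5, s + 100) for s in seeds]
sd_crn, sd_ind = statistics.stdev(diff_crn), statistics.stdev(diff_ind)
```

Because the same stream drives both configurations in a pair, most of the noise cancels in the subtraction, sharpening the comparison without extra runs.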
Step 8. Establish Experimental Conditions for Runs: Address the question of obtaining accurate information and the most information from each run. Determine if the system is stationary (performance measure does not change over time) or non-stationary (performance measure changes over time).
Step 9. Perform Simulation Runs: Perform runs according to steps 7-8 above.
Step 10. Interpret and Present Results: Compute numerical estimates (e.g., mean, confidence intervals) of the desired performance measure for each configuration of interest. Test hypotheses about system performance. Construct graphical displays (e.g., pie charts, histograms) of the output data. Document results and conclusions.
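Step 10's numerical estimates and graphical displays can be sketched together: the fragment below computes a mean with an approximate 95% confidence interval and prints a crude text histogram of one output measure. The sample values and bin width are illustrative assumptions; a real study would use a plotting library for the graphics.

```python
import statistics
from collections import Counter

def summarize(samples, bin_width=1.0):
    """Numerical estimates (mean, approximate 95% CI using the normal
    critical value 1.96) plus a crude text histogram of one performance
    measure -- a sketch of the reporting in Step 10."""
    mean = statistics.mean(samples)
    half = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
    bins = Counter(int(x // bin_width) for x in samples)
    for b in sorted(bins):
        lo, hi = b * bin_width, (b + 1) * bin_width
        print(f"[{lo:4.1f}, {hi:4.1f}) {'#' * bins[b]}")
    return mean, (mean - half, mean + half)

# Hypothetical per-run estimates of the performance measure.
mean, ci = summarize([3.2, 4.1, 4.4, 3.8, 4.0, 4.6, 3.5, 4.2])
```

The confidence interval, not just the point estimate, is what should be reported and used to rank the competing configurations.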
Step 11. Recommend Further Courses of Action: This may include further experiments to increase the precision and reduce the bias of the estimators, to perform sensitivity analyses, and so on.