Scope and Structure

The model consists of a number of discrete ‘functions’ (or unit operations), connected via streams. A function can be broadly described as an activity that converts one or more input streams into one or more output streams, usually requiring one or more utilities. A utility, in turn, is any resource that commands an economic value. Each function is defined in terms of (a) the relationship between its input streams and output streams, and (b) the quantity of utilities required. For a function representing a process, this involves carrying out a mass and energy balance around the process. If a stream can be used as an intermediate or final product, it can also exit the model as an output. Each utility and each output has an assigned economic value.
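As a rough illustration of this structure, the Python sketch below models a function as an object with input streams, output streams and utility demands. All names and figures are hypothetical, chosen for illustration rather than taken from the model itself:

    from dataclasses import dataclass, field

    @dataclass
    class Stream:
        name: str
        mass_flow: float  # t/h

    @dataclass
    class Function:
        # A unit operation converting input streams to output streams,
        # usually consuming one or more utilities.
        name: str
        inputs: list
        outputs: list
        utilities: dict = field(default_factory=dict)  # utility -> quantity

        def mass_balance_error(self) -> float:
            # For a process function, total mass in should equal total mass out.
            return (sum(s.mass_flow for s in self.inputs)
                    - sum(s.mass_flow for s in self.outputs))

    # A milling-like function: cane in, juice and bagasse out, consuming power.
    cane = Stream("cane", mass_flow=100.0)
    juice = Stream("mixed juice", mass_flow=70.0)
    bagasse = Stream("bagasse", mass_flow=30.0)
    mill = Function("milling", inputs=[cane], outputs=[juice, bagasse],
                    utilities={"electricity_kWh": 1500.0})
    assert abs(mill.mass_balance_error()) < 1e-9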

The model uses one large input dataset, split into three sections: one defining the configuration of the model; one containing the variables used to define the functions; and one containing the variables used to define the economic values of utilities and outputs. A library of datasets can be created, each dataset representing a different project, scenario or location. The input dataset is simply selected from this library. Datasets can be created, copied, or deleted as required.
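The sketch below shows one way such a dataset and library could be structured; the section contents and values are invented for illustration only:

    import copy

    # Hypothetical three-section dataset, mirroring the structure above.
    dataset = {
        "configuration": {"include_refinery": True, "include_distillery": True},
        "functions": {"milling": {"extraction": 0.97}},
        "economics": {"refined_sugar_price": 450.0, "ethanol_price": 0.55},
    }

    # The library maps dataset names to datasets; the model selects one.
    library = {"base_case": dataset}

    # Copy an existing dataset to create a new scenario, then modify it.
    library["high_price"] = copy.deepcopy(library["base_case"])
    library["high_price"]["economics"]["refined_sugar_price"] = 520.0

    # Delete a dataset that is no longer required.
    del library["high_price"]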

The ultimate aim of the model is to represent all major processes involved in the production of end-products from sugar cane. At present, the model extends from cane growing through to refined sugar and ethanol production, i.e. it comprises the plantation, factory, refinery and distillery. The current scope is shown in the block diagram below.

Static/Variable Modelling

The methodology described above represents a static model with fixed input data. This is not a true reflection of reality, as there are few (if any) variables that one could define with 100% confidence. To account for this uncertainty, Monte Carlo simulation is built into the model. The fixed input data can be replaced by probability distributions, reflecting a range of possible values. Correlations can be added to take into account dependencies between variables where the exact nature of the relationship is not known. The model can then be simulated hundreds or thousands of times, with the input data varying on each run according to the defined probability distributions and correlations. The output data from each run are stored and processed so that each output can be viewed as a probability distribution rather than a single fixed value. This provides visibility of risk and enables rigorous sensitivity analysis to be carried out.
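The sketch below illustrates this principle in Python (the model itself uses @RISK rather than code like this). Two correlated inputs are sampled repeatedly, a toy model is evaluated for each sample, and the stored outputs form a distribution. The input names, values and correlation are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(seed=42)
    n_sims = 10_000

    # Two uncertain inputs, e.g. cane throughput (t/h) and sugar content,
    # with standard deviations 5.0 and 0.01 and an assumed correlation of 0.6.
    mean = [100.0, 0.12]
    cov = [[25.0, 0.6 * 5.0 * 0.01],
           [0.6 * 5.0 * 0.01, 0.0001]]
    samples = rng.multivariate_normal(mean, cov, size=n_sims)

    # Evaluate a toy 'model' for every sample and store each output.
    def toy_model(throughput, sugar_content, price=450.0):
        return throughput * sugar_content * price  # revenue per hour

    outputs = np.array([toy_model(t, s) for t, s in samples])

    # Each output can now be viewed as a distribution, not one fixed value.
    print(f"mean: {outputs.mean():,.0f}")
    print(f"90% interval: {np.percentile(outputs, 5):,.0f} "
          f"to {np.percentile(outputs, 95):,.0f}")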

This methodology is illustrated in the simple diagrams below.

Software

The model is built in Microsoft Excel. This provides four significant benefits. Firstly, most computer users are familiar with Excel. Secondly, any of Excel's myriad built-in tools can be used for analysis. Thirdly, the modular nature of Excel allows the model and its results to be linked seamlessly to other Excel worksheets or workbooks. And lastly, it allows Palisade's @RISK, an Excel add-in and the world's leading risk and decision analysis software, to be used for Monte Carlo simulation.

Any model of this nature will contain a large number of circular references, and hence requires iterative calculation to converge to a solution. Running the model on a modern computer using Excel 2010, convergence typically takes around two seconds.
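As a minimal sketch of what this iterative calculation does, consider a single recycle loop in which the recycle flow depends on the total flow it feeds back into. Excel resolves such circularity by repeated recalculation until the values stop changing; the figures and tolerance below are illustrative:

    # Recycle loop: total = feed + recycle, recycle = fraction * total.
    feed = 100.0
    recycle_fraction = 0.3
    recycle = 0.0        # initial guess
    tolerance = 1e-9

    for iteration in range(1000):
        total = feed + recycle                 # flow entering the process
        new_recycle = recycle_fraction * total
        if abs(new_recycle - recycle) < tolerance:
            break
        recycle = new_recycle

    # Converges to total = feed / (1 - recycle_fraction) = 142.857...
    print(f"converged after {iteration} iterations: total = {total:.6f}")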