Developing a workflow to represent fractured carbonate reservoirs for simulation models under uncertainties based on flow unit concept

Describing fractured reservoir rock under uncertainties in a 3D model and integrating it with reservoir simulation is still a challenging topic. In particular, mapping the potential zones with reservoir quality can be very useful for decision making and for supporting development planning. This mapping can be done through the concept of flow units. In this paper, an integrated approach including Hierarchical Cluster Analysis (HCA), geostatistical modeling and uncertainty analysis is developed and applied to a fractured carbonate reservoir in order to integrate it with numerical simulation. The workflow begins with different HCA methods, applied to well logs in three wells, to identify flow units and rock types. Geostatistical techniques are then applied to extend the flow units, petrophysical properties and fractures into the inter-well area. Finally, uncertainty analysis is applied to combine different types of uncertainties for generating ensemble reservoir simulation models. The clusters obtained from the different HCA methods are evaluated by the cophenetic coefficient, correlation coefficient, and variation coefficient, and the most appropriate clustering method is used to identify flow units for geostatistical modeling. We subsequently define uncertainties for static and dynamic properties such as permeability, porosity, net-to-gross, fractures, water-relative permeability, fluid properties, and rock compressibility. The Discretized Latin Hypercube with Geostatistical (DLHG) method is applied to combine the defined uncertainties and create an ensemble of 200 simulation models which can span the uncertainty space. Eventually, a base production strategy is defined under operational conditions to check the consistency and reliability of the created models against UNISIM-II-R (reference model), which works as a real reservoir with known results.
The results demonstrate the suitability of the methodology for characterizing fractured reservoirs, since those models are consistent with the reference model (used to generate the simulation models). The proposed workflow provides an efficient and useful means of supporting development planning under uncertainty.


Introduction
Modeling and simulation of fluid flow in naturally fractured reservoirs have been a significant topic in the petroleum industry. The huge potential hydrocarbon reserves in fractured reservoirs have been a major motivation to develop this field of study. Bourbiaux (2010) described the specificities of some of these well-known naturally fractured reservoirs. Hence, many efforts have been made to represent and address some of the challenges of building geological fractured models and integrating them into numerical simulation (Delorme et al., 2014; Bourbiaux, 2010a, 2010b; Noetinger et al., 2016). A robust 3D model of a fractured reservoir is necessary for the simulation of flow behavior and the prediction of production performance. In most cases, geological zonation methods cannot provide a suitable image of the heterogeneous trends of carbonate reservoirs. Thus, given the fluid production conditions, a suitable zonation is essential for better reservoir characterization.
The flow unit concept has been widely used in reservoir representation and modeling. Flow units are defined as regions of a reservoir that are horizontally continuous and homogeneous in terms of petrophysical features (Abbaszadeh et al., 1996; Enayati-Bidgoli and Rahimpour-Bonab, 2016; Hearn et al., 1984; Lopez and Aguilera, 2015). Hence, different methods have been applied to identify flow units depending on the geological conditions, data limitations, and study objectives. Most researchers have computed the Flow Zone Index (FZI) from petrophysical data and then classified the obtained FZI values into different groups, such as flow units (Al-ajmi et al., 2000; Aminian et al., 2003; Soto et al., 2001; Svirsky et al., 2004). Aminian et al. (2003) developed a neural network technique to determine flow unit types and predict petrophysical parameters in the reservoir. Mahjour et al. (2016) compared different methods to identify flow units in the Tabnak gas field in southern Iran. They concluded that Hierarchical Cluster Analysis (HCA) is more efficient than the other methods used for the general assessment of flow units at field scale. Their integrated model is in accordance with the well log analyses and core data results.
Flow units are also used to divide a reservoir into different zones which are suitable for the simulation process, field development (Shan et al., 2018) and production strategies (Enayati-Bidgoli et al., 2014). 3D flow unit modeling is generally utilized in field exploration and development planning, ultimate oil recovery prediction, well placement optimization, and production strategies (Enayati-Bidgoli et al., 2014).
In recent years, the quantification and understanding of uncertainty types in fractured carbonates have become increasingly important for flow unit modeling and integration with reservoir simulation. Petrophysical models are created from well-log and core data over short intervals. Lateral flow unit changes are expected among the wells, given the long inter-well distances, which presents a challenge for the modeling process. Since, in petrophysical modeling, the main parameter controlling the petrophysical distributions is the flow unit, these properties are expected to change along with flow units over short distances. The distribution of petrophysical parameters depends on flow units; therefore, the uncertainty of the flow unit distribution introduces additional uncertainty into these parameters. Furthermore, other types of uncertainties should be considered during fracture modeling (e.g., fracture spacing, fracture length, fracture aperture) and fluid flow modeling (e.g., relative permeability curves, rock compressibility, PVT data) to obtain a comprehensive uncertainty analysis for making better reservoir simulation models and field development decisions.

Objective
The purpose of this study is the representation of a fractured reservoir model, based on the flow unit concept, and its integration with numerical simulation to generate an ensemble of reservoir models under different types of uncertainties in an initial field management phase. In addition, differences between the created models and the UNISIM-II-R model, working as a real reservoir with known results, are highlighted in order to check the consistency and reliability of the created models for the later stages of field development.
Model data

UNISIM-II-R is a reference model which was constructed based on structural, facies and petrophysical models, using geological and rock/fluid data from Brazilian pre-salt reservoir data, Ghawar field information, a real carbonate reservoir (Field A) and synthetic data (Correia et al., 2015). The structural aspect, including horizons, reservoir boundary and 16 faults, was previously defined from a real carbonate reservoir. The size of each grid cell in the reference model is 50 × 50 × 1 m. Known results from this high-resolution grid model were obtained to test and compare other methodologies.

Methodology
The work structure is divided into four steps: (1) zonation of the reservoir into flow units using HCA, (2) construction of a base case static model using geostatistical techniques, (3) integration of different types of uncertainties to generate ensemble reservoir simulation models using Discretized Latin Hypercube with Geostatistical (DLHG), and (4) evaluation of several objective functions based on UNISIM-II-R model to check the consistency and reliability of the created models.

Identification of flow units using HCA method
Hierarchical clustering is an approach to data classification using a hierarchical dendrogram. This technique helps to divide a ''pile'' of information into meaningful groups based on group similarity. The obtained groups are internally homogeneous and externally heterogeneous (Mahjour et al., 2016; Stinco et al., 2001). The basis of the clustering method is the distance between two data objects in a matrix, used to measure the similarity of the data. Different distance measures are available in the literature (Hatampour et al., 2015; Pandit and Gupta, 2011). In this paper, the Euclidean, Manhattan, Pearson, Squared Euclidean and Squared Pearson distance measures are used. After the calculation of the distances between the different variables, the elements are grouped into a hierarchical cluster tree. In this step, pairs of objects are linked together using a linkage function (Hatampour et al., 2015). The distance information obtained in the previous step is used by the linkage function to determine the proximity of objects to each other. Then, the highest similarities between data pairs are computed in order to generate and connect larger clusters from the newly built clusters. Several linkage functions are used in this work: single linkage, complete linkage, average linkage, Ward linkage, and median linkage.
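The clustering step above can be sketched with SciPy's hierarchical clustering tools. The porosity/permeability samples below are synthetic stand-ins for the well-log data, and the metric and linkage choices mirror the options listed in the text; this is an illustration, not the paper's actual dataset.

```python
# Minimal sketch of hierarchical clustering into flow units, using
# synthetic (porosity, log-permeability) samples in place of real logs.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
# Two synthetic "rock types", well separated in both variables
samples = np.vstack([
    rng.normal([0.10, 1.0], 0.02, size=(50, 2)),
    rng.normal([0.25, 3.0], 0.02, size=(50, 2)),
])

# Build the cluster tree: Euclidean distances + a chosen linkage rule
tree = linkage(samples, method="ward", metric="euclidean")

# Cut the dendrogram into a fixed number of clusters (flow units)
labels = fcluster(tree, t=2, criterion="maxclust")
print(len(set(labels)))  # prints 2: both synthetic rock types recovered
```

Swapping `method` for "single", "complete", "average" or "median" reproduces the linkage functions compared in this work.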
If the scales of the elements are different, elements with large values contribute more to the distance measure than elements with small values. Hence, normalization is particularly important when the elements use different scales. Suppose element A is permeability on a scale from 0 to 5000 mD and element B is porosity on a scale from 0 to 1. If the data are not normalized, the clustering procedure places greater weight on element A than on element B due to the higher values of its scale, which is not the desired result. Therefore, elements with different scales should be normalized.
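As a small illustration of this point, a min-max rescaling (one common normalization choice; the text does not specify which scheme is used) brings both elements onto the same 0–1 range:

```python
import numpy as np

def min_max_normalize(x):
    """Rescale an array to [0, 1] so elements on different scales
    contribute comparably to a distance measure."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

perm = np.array([10.0, 500.0, 2500.0, 5000.0])  # element A: mD, 0-5000
poro = np.array([0.05, 0.15, 0.22, 0.30])       # element B: fraction, 0-1

# After rescaling, both variables span [0, 1] and weigh equally
print(min_max_normalize(perm), min_max_normalize(poro))
```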
Meanwhile, it is important to know the number of clusters in advance. Different methods have been proposed to achieve this goal, such as the rule of thumb, the elbow method, information criterion approaches, and cross-validation (Trupti and Makwana, 2013). The elbow method, one of the most common methods to determine the optimal number of clusters, is applied in the current study. It has this name because the generated curve looks like the shape of an ''arm'', and we look for the ''elbow'' to define an acceptable number of clusters based on the sample data. The elbow criterion indicates how many clusters are enough, in the sense that adding another cluster does not add much information. In this method, a curve is drawn in which the x-axis is the number of clusters and the y-axis is the distortion percentage, or percentage of explained variance. The percentage of explained variance is the ratio of the between-group variance to the total variance. The group variance is computed from the distance values (Eq. (1)), and the total variance is the sum of the variance over all groups:

σ² = (1/N) Σ_{i=1}^{N} (X_i − μ)²     (1)

where μ is the population mean, N is the population size, and X_i is the distance. After selecting the right number of clusters, the cophenetic coefficient is applied to validate the precision of the clusters and their linkage for dendrogram construction. In this case, the data linkage should be closely related to the distances. In other words, cophenetic coefficients are applied to evaluate hierarchical clustering with different distance and linkage functions. The closer the value of the cophenetic coefficient is to 1, the higher the quality of the clustering.
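The elbow criterion can be sketched as follows: cut the cluster tree at increasing numbers of clusters and track the percentage of explained variance. The data here are synthetic, with three well-separated groups, so the curve flattens after k = 3.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic 1D data with three well-separated groups
data = np.concatenate([rng.normal(m, 0.2, 40) for m in (0.0, 3.0, 6.0)])
data = data.reshape(-1, 1)
tree = linkage(data, method="ward")

total_var = ((data - data.mean()) ** 2).sum()
explained = []
for k in range(1, 8):
    labels = fcluster(tree, t=k, criterion="maxclust")
    # Within-group sum of squares for this cut of the dendrogram
    within = sum(((data[labels == c] - data[labels == c].mean()) ** 2).sum()
                 for c in np.unique(labels))
    explained.append(100.0 * (1.0 - within / total_var))

# The "elbow" is where adding one more cluster stops adding information
print([round(e, 1) for e in explained])
```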
The cophenetic coefficient between Z and Y is defined in equation (2):

c = Σ_{i<j} (Y_ij − y)(Z_ij − z) / sqrt[ Σ_{i<j} (Y_ij − y)² · Σ_{i<j} (Z_ij − z)² ]     (2)

where Y_ij is the distance between variables ''i'' and ''j'' in Y, Z_ij is the cophenetic distance between variables ''i'' and ''j'' from Z, and y and z are the average values of Y and Z, respectively. Meanwhile, porosity-permeability plots in carbonate reservoirs show a large scatter, and the correlation coefficient between the two properties is low. Therefore, a better porosity-permeability relationship is determined if rocks with the same fluid flow behaviour are identified and grouped together.
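SciPy exposes this coefficient directly, so the comparison of linkage functions described here can be sketched as follows (random data are used as a stand-in for the normalized well logs):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
samples = rng.normal(size=(60, 2))   # stand-in for normalized log data
Y = pdist(samples)                   # original pairwise distances

results = {}
for method in ("single", "complete", "average", "ward", "median"):
    Z = linkage(samples, method=method)
    c, _ = cophenet(Z, Y)            # cophenetic correlation against Y
    results[method] = c
    print(f"{method:8s} {c:.3f}")
```

As in the text, the linkage whose coefficient is closest to 1 would be preferred.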
Another parameter used to evaluate each flow unit is the Variation Coefficient (C_v), which quantifies heterogeneity and is one of the common simple statistical techniques to measure static heterogeneity. Since permeability affects flow and displacement far more than the other properties, heterogeneity measures are almost always applied to permeability data (Jerry et al., 1997). Variability can be compared between different flow units and sampling plans can be adjusted for the present variability, where C_v is a measure of variability in proportion to the mean value. The most commonly used equation to calculate the variation coefficient is shown in equation (3), although numerous variations of this approach can be found in the published literature:

C_v = sqrt(σ²) / x̄     (3)

where C_v is the variation coefficient, sqrt(σ²) is the standard deviation and x̄ is the permeability mean. Corbett and Jensen (1992) proposed a classification of C_v values based on a large number of permeability samples from different types of reservoirs. During hierarchical clustering, the result is illustrated on a tree diagram, named a dendrogram, which represents the clusters formed by grouping observations at each step and their levels of similarity. In this diagram, the vertical axis shows the level of similarity between the objects in each cluster and the horizontal axis shows the objects.
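Equation (3) amounts to a one-line computation. The illustrative numbers below (not field data) show how pooling two distinct flow units inflates C_v relative to either unit alone:

```python
import numpy as np

def variation_coefficient(perm):
    """C_v = standard deviation / mean of the permeability samples."""
    perm = np.asarray(perm, dtype=float)
    return perm.std(ddof=1) / perm.mean()

unit_a = np.array([100.0, 120.0, 110.0, 95.0])       # mD, one flow unit
unit_b = np.array([2000.0, 2400.0, 1800.0, 2200.0])  # mD, another unit
pooled = np.concatenate([unit_a, unit_b])            # all data together

# Each unit is far more homogeneous than the pooled data set
print(variation_coefficient(unit_a) < variation_coefficient(pooled))  # True
```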

Construction of a 3D base case static model
Building a 3D base reservoir model using stochastic simulation techniques is required to produce more realistic images of reservoir heterogeneity based on the existing uncertainties. The structural model, property models (flow units, porosity, permeability, and NTG) and fracture network modeling are used to generate the basic 3D model of the reservoir. The basic workflow of static geological modeling is shown in Figure 1. It includes seven phases:
1. Defining well and fault locations and applying flow units and petrophysical data in order to build the structural, flow unit and petrophysical models.
2. Structural modeling based on the stratigraphic layering to get an appropriate geostatistical grid according to the known heterogeneities.
3. Flow unit modeling using the object modeling approach for Super-K (the highest-permeability flow unit) and the Sequential Indicator Simulation (SIS) algorithm for the other flow units. Generally, a realistic 3D image of the reservoir can be provided by the SIS method.
4. Petrophysical modeling for porosity, permeability, and NTG based on flow units, using the Sequential Gaussian Simulation (SGS) algorithm. The SGS method is a kriging-based technique in which un-sampled locations are visited in a random order until all have been visited (Schiozer et al., 2017).
5. Fracture modeling using the Discrete Fracture Network (DFN) method. DFN modeling and conversion to effective properties for the simulation process is done directly on the grids.
6. Upscaling to decrease simulation time. It is required to scale the high-resolution reservoir model to the coarser resolution of the production simulation.
7. Building and validating the base case model as the first model generated, based on statistical and geological consistency.

Integrating uncertainties to generate ensemble reservoir simulation models
The methodology presented in this stage integrates uncertainties to generate several simulation models, which ensures a better prediction of the field behavior and can be applied to define strategies for the management of the field. The main idea is to use the observed data to describe the uncertain variables and check the consistency of the created models during the initial phase of the field development. The general steps of the methodology (Fig. 2) are summarized below:
1. Characterizing the distribution, variation range and number of uncertainty levels (in the case of discrete attributes) for static and dynamic parameters.
2. Integrating all types of uncertainty using a statistical approach. In this paper, the DLHG method (Schiozer et al., 2017) is applied. This method generates multiple realizations for the properties related to the geological model. It is not complex and requires only a few runs to show the variability of the geostatistical realizations. The DLHG method has been successfully applied in several cases.
This procedure is repeated until the results fulfill a user-defined criterion. For example, this criterion can be based on the dispersion of the production and pressure curves, given the history.
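The Latin hypercube part of this sampling step can be sketched with SciPy's `qmc` module. The attribute names and ranges below are purely illustrative, and the full DLHG method additionally pairs each sample with a geostatistical realization, which is not reproduced here.

```python
import numpy as np
from scipy.stats import qmc

n_models = 200
sampler = qmc.LatinHypercube(d=3, seed=7)
u = sampler.random(n=n_models)     # stratified samples in [0, 1)^3

# Map each dimension onto an uncertain attribute (illustrative ranges)
kxy_mult = qmc.scale(u[:, [0]], 0.5, 2.0).ravel()  # horizontal perm multiplier
por_mult = qmc.scale(u[:, [1]], 0.8, 1.2).ravel()  # porosity multiplier
pvt_level = np.floor(u[:, 2] * 3).astype(int)      # 1 of 3 equiprobable PVT tables

print(len(kxy_mult), sorted(set(pvt_level.tolist())))  # 200 [0, 1, 2]
```

Each of the 200 rows then defines one simulation model in the ensemble.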

Results and discussion
The results and discussion section follows the steps of the methodology:
1. The main input data for the HCA method are the Reservoir Quality Index (RQI) and the normalized porosity (φz) (Abbaszadeh et al., 1996) of the three wells studied (Wildcat Well, Exploration Well1 and Exploration Well2) from the UNISIM-II model. The variables are normalized because of their different scales. In the elbow method, the range of cluster numbers on the x-axis is from 1 to 433, the total amount of data. A zoomed view of the resulting plot is represented in Figure 3, which shows a slump in the percentage of explained variance at point six; therefore, the number of clusters for this study is six.
The cophenetic coefficient values are calculated for the applied distance measures and linkage functions. The results show that, for each linkage function, the values obtained with the different distance measures are essentially the same. Therefore, any of the various distance measures can be applied for the clustering of the data without affecting the performance of the methodology. Furthermore, the cophenetic coefficients for the average linkage, complete linkage, median linkage, single linkage, and Ward linkage are 0.87, 0.87, 0.80, 0.83, and 0.89, respectively. Hence, the cophenetic coefficient obtained with the Ward linkage function is higher than with the other applied functions, and it is the most appropriate linkage to cluster the corresponding data (cophenetic coefficient: 89%). The Euclidean distance measure and Ward linkage are therefore used to identify the flow units and generate the geostatistical models in this study. Moreover, the porosity-permeability correlation coefficients for all clusters obtained with the Euclidean distance and Ward linkage are high on average (Fig. 4), which means these methods are significantly useful for the zonation of reservoirs into flow units with high correlation coefficients between porosity and permeability data. As the last step of the flow unit evaluation, permeability data from the three wells are used to quantify the heterogeneity measures. In Table 1, the variation coefficient is used to show the degree of heterogeneity in each flow unit and in the total data. The permeability of the entire data set is clearly more heterogeneous (C_v = 3.38) than the permeability data for each flow unit. This reflects that cluster analysis is a useful approach to simplify the reservoir heterogeneity into homogeneous groups of flow units.
In the next step, the dendrogram is created using Ward linkage and Euclidean distance, based on the number of clusters obtained from the elbow method. The diagram is illustrated in Figure 5, generated with statistical software.
2. According to the UNISIM-II benchmark model, there is a non-reservoir zone as a segregated unit, and it must be added to the six flow units obtained from cluster analysis to build a 3D base case flow unit model. The porosity and permeability of the non-reservoir zone are zero (seven units in total). The data collected in this work for building a 3D base case static model include well and fault locations, well paths and tops, petrophysical properties and flow unit types. The structural model used to build the base case model is the same as that of UNISIM-II-R (Correia et al., 2015), but the geostatistical modeling is limited to well log information from three wells (Wildcat Well, Exploration Well1, and Exploration Well2).
Petrophysical parameters are available as logging data along the well trajectories. The data must be scaled up and averaged into the cell size defined in the reservoir grid before 3D modeling. This provides a value of each property for every cell. There are several methods to average petrophysical parameters. Usually, the arithmetic averaging method is the best one for porosity and fracture intensity, due to their static nature, and the geometric averaging method is suitable for permeability data. For discrete well logs (e.g., facies or flow units), the ''Most of'' averaging method is recommended. Figure 6 shows the upscaled well log for porosity data.
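The three averaging rules can be sketched for the log samples of one coarse cell; the values below are illustrative, not taken from the field data:

```python
import numpy as np
from scipy import stats

# Log samples falling inside one coarse grid cell (illustrative values)
porosity = np.array([0.12, 0.15, 0.14, 0.13])
permeability = np.array([50.0, 400.0, 120.0, 80.0])  # mD
flow_unit = np.array([2, 2, 3, 2])                   # discrete log

arithmetic_poro = porosity.mean()               # static property: arithmetic
geometric_perm = stats.gmean(permeability)      # flow property: geometric
most_of_unit = np.bincount(flow_unit).argmax()  # discrete log: "Most of" (mode)

print(round(arithmetic_poro, 3), most_of_unit)  # 0.135 2
```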
To create a realistic distribution of flow units in the outcrop geostatistical model, the 3D grid is filled with the flow unit data from the measured sections. The obtained models are shown by a vertical flow unit proportion curve (Fig. 7). The main controls for generating the outcrop flow unit model are the semi-variograms based on the outcrop stratigraphic sections. Indicator semi-variograms are built in several directions to identify the orientation with the best spatial continuity. There is a strong link between the diversity of the semi-variograms and geological attributes such as the vertical layering of the outcrop, lateral flow unit changes, and topographic features. The NW-SE direction shows the best continuity of flow units and is defined as the major axis of the semi-variogram. The NE-SW direction, which has less continuity, is defined as the minor direction (Fig. 8). Table 2 describes the anisotropy ranges of the matrix system. The anisotropy range is applied to each flow unit distribution. The range is the maximum distance (in meters) at which sample values are related to each other; a smaller value indicates a higher anisotropy. For the base case flow unit modeling, the SIS algorithm with a vertical trend is applied to all flow units except flow unit 7 (Super-K) (Fig. 9). The Super-K flow unit is randomly generated by the object modeling method (Fig. 10). Super-K modeling is performed separately from the other flow units because of its post-depositional genesis, related to diagenetic events. Given their random distribution, Super-K features are not fully connected. The idea is that the background matrix and fractures support the connection between these features, as in the Ghawar Field. Flow unit modeling must preserve all geological information of the reservoir, including body form, dimensions, and spatial trends. Thus, a histogram analysis is performed to check the quality of the results during the flow unit modeling procedure.
Figure 11 illustrates a comparison of the well logs, regularized (upscaled) well logs and flow unit distribution in the entire reservoir. Based on this statistical analysis, it is clear that the flow unit modeling performed well: the well logs, regularized well logs and flow unit distribution in the whole reservoir are closely matched. For a more realistic reservoir model, it is necessary to link the simulated flow units and petrophysical parameters, such as porosity and permeability, according to the flow unit distribution, in order to constrain the simulations in the inter-well areas.
Under the control of the structural and flow unit models, petrophysical models are built for the reservoir parameters, including porosity, permeability, and NTG. A 3D stochastic simulation (SGS) is performed for the petrophysical modeling, combining well data, the anisotropy ranges (Table 2) and the 3D flow unit model to control the porosity and permeability distributions. For the base case porosity and permeability modeling, we apply a normal distribution and a log-normal distribution, respectively. The permeability data are stochastically modeled as a function of porosity using the collocated co-kriging method. An interval flow unit cut-off is applied to calculate the NTG of each zone within the model. The flow unit types are associated with porosity, and this is the criterion used to set up the reservoir NTG. The NTG property is defined as 1 for all permeable flow units, and 0 for flow unit 6, the non-reservoir zone. The base case porosity model is illustrated in Figure 12. Figures 13 and 14 represent the horizontal permeability for the matrix system and the NTG model, respectively.
After the petrophysical modeling, it is necessary to check the quality of the modeling results. Therefore, histograms of porosity and horizontal permeability for the well logs, the upscaled well logs and the modeled porosity-permeability distribution in the whole reservoir are made, as can be seen in Figures 15 and 16.
Generally, there is minimal qualitative difference between the upscaled cells and the modeled property. However, it is important to perform a quantitative analysis to complement the qualitative one. In brief, the petrophysical modeling of porosity and permeability is performed satisfactorily, preserving almost the same distribution pattern as the upscaled well logs.
A precise description of the fracture network is important for understanding and predicting the flow responses of the reservoir (Verscheure et al., 2012). For fracture modeling, DFN modeling, based on fracture intensity, orientation, length, transmissibility, and aperture, is applied to the fine grid model (Correia et al., 2014). For DFN modeling, the intensity of fractures is taken to be inversely proportional to the distance to faults. Figures 17 and 18 represent the fault distance and intensity distribution, respectively, for the base case model.
A fractured network can be divided into several sets, each having its specific features. Figure 19 shows the DFN for two sets of fractures. A rose diagram is a good method to represent the relative statistical prevalence of diverse directional trends, such as the strike direction of fractures (Fig. 20). A histogram plot is another way to illustrate the relative prevalence of the trends (Fig. 21). Due to the high number of simulation runs generated during reservoir management, it is necessary to perform an upscaling procedure to decrease the computational effort. The cell scale of the simulation model must preserve representative simulation behavior. In this study, we assume a grid cell size of 100 × 100 × 8 m. Porosity is upscaled using the arithmetic average weighted by NTG. Figure 22 represents the upscaled porosity map for the base case model. A flow-based upscaling method is applied to the permeability to preserve the Super-K flow features and generate three effective permeabilities in all directions (i, j and k). Figure 23 shows the result of the permeability upscaling in the K_i direction for the base case model. NTG is upscaled using the arithmetic averaging method (Fig. 24). In order to upscale the fracture properties, it is necessary to convert the DFN into equivalent effective properties for the simulation process (Bourbiaux et al., 1998). One of the fastest methods to generate these equivalent fracture properties is Oda's solution (Oda, 1985). The advantage of Oda's solution is that it can be calculated without any flow simulation, resulting in fast processing times. Figures 25 and 26 represent the fracture spacing and fracture permeability as equivalent parameters after the upscaling of the DFN.
After the upscaling process, it is necessary to check the consistency and quality of the base case results. One way to check the quality of the results is with volume calculations. Volume-based calculations are used as part of the upscaling quality control to see whether there are any differences between the values calculated on the fine grid and those calculated on the coarse grid. Any differences between the grid structures indicate issues with the definition and distribution of the property values in the fine grid models, in the coarse grid models, or both. The parameters to be calculated for the fine and coarse models are the bulk volume, net bulk volume and pore volume. The bulk volume is derived from the grid structure. It is the first parameter to check, because any issue with it has a direct impact on the accuracy of the assigned properties. All issues need to be checked and corrected; for example, boundaries and vertical intervals are required to be the same in both grids. Once issues with the grid structure have been corrected, any remaining issue with the net bulk volume indicates that the NTG upscaling has gone wrong, so it is necessary to check the methods applied to create the NTG. Checking the pore volume highlights any issue with the porosity, and the same checks must be performed as for the NTG. Table 4 represents the values of bulk volume, net bulk volume and pore volume after applying the cut-off procedures for NTG. The results show that the fine and coarse models are consistent, and the base case model is suitable for uncertainty analysis and for generating multiple images.
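The volume checks described above reduce to simple sums over the cell arrays. The sketch below, with made-up fine and coarse grids, shows the consistency test that the bulk, net bulk and pore volumes must pass:

```python
import numpy as np

def volumes(bulk, ntg, phi):
    """Bulk, net bulk and pore volume summed over a grid's cells."""
    bulk_volume = bulk.sum()
    net_volume = (bulk * ntg).sum()
    pore_volume = (bulk * ntg * phi).sum()
    return bulk_volume, net_volume, pore_volume

# Illustrative fine grid (8 cells) and its single-cell coarse equivalent
fine_bulk = np.full(8, 2500.0)  # m^3 per fine cell
fine_ntg = np.array([1, 1, 1, 0, 1, 1, 0, 1], dtype=float)
fine_phi = np.full(8, 0.2)

coarse_bulk = np.array([20000.0])
coarse_ntg = np.array([fine_ntg.mean()])  # arithmetic NTG upscaling
coarse_phi = np.array([0.2])

fine = volumes(fine_bulk, fine_ntg, fine_phi)
coarse = volumes(coarse_bulk, coarse_ntg, coarse_phi)
# Fine and coarse volumes must agree for a consistent upscaling
print(all(abs(f - c) < 1e-9 for f, c in zip(fine, coarse)))  # True
```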
3. This part describes the uncertainties for static and dynamic attributes, during the initial phase of the field development. Table 5 represents the input uncertainty properties for reservoir simulation data, considering the static and dynamic uncertainties.
Flow units and petrophysical properties are generated using a random seed during the modeling procedure. Flow units are applied as rock types for reservoir simulation and are the background for the generation of the petrophysical properties (porosity, permeability, and NTG). Flow unit 7 (Super-K) is treated as a separate property, given its impact on fluid flow and its different modeling approach. Different levels of water-relative permeability curves for Super-K and the other flow units are generated and included in the uncertainty models (Figs. 27 and 28). Discrete fractures are transformed into static properties (fracture spacing, fracture porosity, and fracture permeability) after the upscaling process. PVT tables are added to represent the uncertainty in oil density and gas in solution. Uncertainty in rock compressibility is also identified during the initial development planning and, finally, different levels of absolute permeability in the horizontal (KxKy) and vertical (Kz) axes, and Porosity Multipliers (POR), are considered as uncertainty attributes.
Tables 6 and 7 show uncertainty variables from geological attributes, and statistical values applied for geologic parameters, respectively. The mean value is used considering the information from well logs. The standard deviation for the mean value at an initial stage of field development must be the highest value to cover all possible scenarios for a later history matching process.
Considering the geologic uncertainties mentioned above, 200 images of the petrophysical characteristics (matrix and fracture porosities, matrix and fracture permeabilities, fracture spacing, NTG, thickness ratio and rock type/flow unit) are generated. Table 8 describes the uncertainty data and scenarios used to construct the simulation models. The occurrence of each uncertainty level is equiprobable. The next step consists of combining the defined uncertainties using the DLHG method. The simulation models are generated in a process of reservoir studies during initial field development under uncertainties, including 1.5 years of production data, and based on the well log information of three wells (two Exploration Wells and one Wildcat). The exploration wells are used only for geostatistical purposes. In order to check the consistency of the created models with UNISIM-II-R, the Wildcat Well is operated under operational conditions with a minimum Bottom-Hole Pressure (BHP) of 250 kgf/cm² and a maximum Surface Gas Production (SGP) of 50 000 m³/day. Several objective functions are investigated based on the UNISIM-II-R model to validate the results. Figures 29-32 represent the BHP, oil rate (Qo), gas rate (Qg) and water rate (Qw) of the Wildcat Well, respectively, for the history data and the 200 models. The history data of the UNISIM-II-R model for the Wildcat Well during the 1.5 years of production lie well within the spread of the model curves. Next, Tables 9 and 10 give the statistical parameters of the OIP and WIP for the simulation models, respectively. Given the reservoir simulation results, the constructed models are considered consistent and validated. We stress that such uncertainty analysis of flow unit types can be valuable for the later stages of field development and history matching procedures, which will be considered in a forthcoming paper.
It is expected that the integration of HCA with geostatistical simulation and uncertainty analysis improves the accuracy of future reservoir models.

Conclusion
The main contribution of this work is an integrated workflow for a fractured reservoir model to generate an ensemble of simulation models, based on the flow unit concept, which can span the uncertainty space. The workflow includes the HCA method to identify flow units, geostatistical methods to extend the flow units and petrophysical properties into the inter-well area, and uncertainty analysis to integrate dynamic and static uncertainties for generating the ensemble of reservoir simulation models. As the outcome of the HCA method, six flow units and a non-reservoir zone (seven units in total) are identified using well log data. The obtained flow units are evaluated by the cophenetic coefficient, correlation coefficient, and variation coefficient. According to the cophenetic coefficients of the different distance and linkage functions, the distance function does not affect the performance of the cluster analysis, and the Ward linkage method is the most appropriate linkage compared with the other applied linkage methods in this case study. High correlation coefficients between porosity and permeability data (above 85% on average) also confirm this assertion. Furthermore, based on the variation coefficient, hierarchical clustering is a useful approach to simplify the characterization of reservoir heterogeneity into homogeneous flow units. The SIS algorithm with a vertical trend is then applied to all flow units except Super-K (the highly permeable zone). The Super-K flow unit is randomly generated by the object modeling method because of its post-depositional genesis related to diagenetic events. SGS is performed for the petrophysical modeling, combining well data, anisotropy ranges and the 3D flow unit model to control the porosity, permeability and NTG distributions. For fracture modeling, DFN modeling is done on the fine grid model, based on the fracture intensity and fault distances.
Given the number of grid cells in the fine grid models and the long simulation time for each model, an upscaling process is required to decrease computational time. In order to check the consistency between the fine and upscaled models, the bulk volume, net bulk volume and pore volume of both models are compared. To generate an ensemble of 200 simulation models, the dynamic and static uncertainties defined in the uncertainty analysis process are combined using the DLHG method. The results show the importance of robust reservoir characterization and integration with reservoir simulation, especially at an initial stage of a field management plan where a small amount of data is available. As a validation of the methodology, a base production strategy is defined to check the reliability of the created models against a reference model working as a real reservoir with known results. The models based on the workflow show sufficient consistency with the reference model under operational conditions and are useful for the subsequent stages of field development.