Regular Article
Evaluation of an uncertainty reduction methodology based on Iterative Sensitivity Analysis (ISA) applied to naturally fractured reservoirs
Department of Energy – Division of Petroleum Engineering, FEM/CEPETRO – University of Campinas – Unicamp, Campinas, São Paulo, Brazil
* Corresponding author: luisnagasaki@gmail.com
Received: 11 July 2018
Accepted: 5 March 2019
History matching for naturally fractured reservoirs is challenging because of the complexity of flow behavior in the fracture-matrix combination. Calibrating these models in a history-matching procedure normally requires integration with geostatistical techniques (Big Loop, where history matching is integrated with reservoir modeling) for proper model characterization. In problems involving complex reservoir models, it is common to apply techniques such as sensitivity analysis to identify the most influential attributes and focus effort on those that most impact the response. Conventional Sensitivity Analysis (CSA), in which a subset of attributes is fixed at a unique value, may over-reduce the search space so that it is not properly explored. An alternative is Iterative Sensitivity Analysis (ISA), in which CSA is applied multiple times throughout the iterations. ISA follows three main steps: (a) CSA identifies Group i of influential attributes (i = 1, 2, 3, …, n); (b) the uncertainty of Group i is reduced, with the other attributes fixed; and (c) the process returns to step (a) and repeats. Conducting CSA multiple times allows the identification of influential attributes hidden by the high uncertainty of the most influential ones. In this work, we assess three methods: Method 1 (ISA), Method 2 (CSA), and Method 3 (no sensitivity analysis, i.e., varying all uncertain attributes over a larger search space). Results showed that the number of simulation runs for Method 1 dropped 24% compared to Method 3 and 12% compared to Method 2 to reach a similar matching quality of acceptable models. In other words, Method 1 reached a similar quality of results with fewer simulations; ISA can therefore perform as well as CSA while demanding fewer simulations. All three methods identified the same five most influential attributes of the initial 18. Even with many uncertain attributes, only a small percentage is responsible for most of the variability of the responses, and their identification is essential for efficient history matching. For the case presented in this work, a few fracture attributes were responsible for most of the variability of the responses.
© L.A.N. Costa et al., published by IFP Energies nouvelles, 2019
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
History matching is important in reservoir studies, providing reliable models to forecast production and test strategies. History matching of naturally fractured reservoirs can be challenging, as these reservoirs comprise features, different from those of conventional reservoirs, that influence flow behavior. A detailed analysis of several aspects of this type of reservoir was given by Bourbiaux (2010) and Lemonnier and Bourbiaux (2010a, b), describing the challenges in recovery methods, matrix-fracture transfer, simulator formulation, and history matching. For instance, fractures can act as flow paths (causing early water breakthrough), add heterogeneity, require a second medium to model fractures separately from the matrix, or might have to be modeled as discrete objects.
To deal with discrete objects, Jenni et al. (2007) applied gradual deformation to modify fracture characteristics, such as size, quantity, and shape. Additionally, they applied the algorithm several times with different seeds to find several history-matched models. Similarly, Verscheure et al. (2012) applied gradual deformation to modify the positions of sub-seismic faults. Caers (2003) applied gradual deformation with training images to constrain new realizations. There are also methods based on gradients (Cui and Kelkar, 2005; Gang and Kelkar, 2008), on streamlines (Al-Harbi et al., 2004), on connectivity analysis (De Lima et al., 2009), among others (Lange, 2009; Al-Anazi and Babadagli, 2009; Suzuki et al., 2007; Ginting et al., 2011).
When fractures are modeled as discrete objects, the additional attributes to match increase the search space and, consequently, the difficulty of finding good combinations of attributes. Search space reduction then becomes necessary. One traditional method is sensitivity analysis, which identifies attributes with a high impact on the model response (few attributes are responsible for most of the variation in the response).
Usually, once the most influential attributes are identified, the others are fixed at their expected values during the history-matching process to reduce the search space. This approach is the Conventional Sensitivity Analysis (CSA). There are different methods to compute the sensitivity, for instance, variance-based and distance-based, and the sensitivity can be evaluated locally or globally. As an example, Fenwick et al. (2014) proposed a distance-based global sensitivity analysis, in which the response is divided into classes: if the frequency distribution of a parameter is the same in all classes, the response is insensitive to the parameter; otherwise, the response is sensitive to it. They showed a case study of a West Africa offshore oil reservoir. Ruffo et al. (2006) applied a variance-based sensitivity analysis technique to perform risk analysis of oil and gas production in a reservoir considering different geological scenarios. Manceau and Rohmer (2016) applied global sensitivity analysis with metamodels in a CO2 storage case. Touzani and Busby (2014) showed a case with 180 inputs, each one representing a homogeneous zone inside a layer of the reservoir, and evaluated their influence on the water cut.
For history matching of conventional reservoirs, Almeida Netto et al. (2003) presented an application of sensitivity analysis in a history-matching case. Maschio et al. (2009) applied sensitivity analysis with probability redistribution of discretized levels in the history-matching problem. Similarly, Becerra et al. (2011) used sensitivity analysis in an approach to reduce the uncertainty in a history-matching problem. Kassenov et al. (2014) showed an application of sensitivity analysis with design of experiments in the Tengiz field.
Some authors, such as Arastoopour and Chen (1991), Liu et al. (2003), Jafari and Babadagli (2008), and Yu et al. (2014), applied sensitivity analysis in fractured reservoirs to assess the influence of attributes. Regarding the history-matching problem, Tolstukhin et al. (2012) applied a sensitivity analysis to a portion of the Ekofisk field (North Sea, south of Norway) and identified the eight most important attributes for history matching, six of them fracture related: fracture distribution, orientation, width, width-to-length ratio, permeability, and density. Costa et al. (2018) applied an Iterative Sensitivity Analysis (ISA) approach to the uncertainty reduction of global attributes of a complex naturally fractured reservoir.
This brief review shows that there are many works on the application of sensitivity analysis but few on the specific problem of history matching for naturally fractured reservoirs. A review of global sensitivity analysis methods can be found in Iooss and Lemaître (2015).
One problem with CSA (fixing a subset of attributes at unique values) is over-reduction of the search space. An alternative is to apply sensitivity analysis iteratively, where in each iteration only the current group of most influential attributes varies while the others are kept fixed; in other words, CSA is applied repeatedly. This approach is the ISA. An example of application is given by Costa et al. (2018), who applied the ISA approach to the uncertainty reduction of global attributes of a complex naturally fractured reservoir. The high uncertainty of strongly influential attributes might disguise the influence of other, less influential ones; additional sensitivity analyses can identify these attributes.
This paper compares three methods of uncertainty reduction for naturally fractured reservoirs: Method 1, applying ISA; Method 2, applying sensitivity analysis once at the beginning to identify the most influential attributes (CSA); and Method 3, allowing the uncertainty level of all attributes to vary throughout the process of uncertainty reduction. The focus of this work is not exactly on how to compute the sensitivity but on how to apply sensitivity analysis to improve the history-matching process.
Results show that Method 1 reached good responses with significantly fewer simulations; in other words, the ISA method performed better than CSA. The time and effort saved by Method 1 are particularly important in complex cases.
2 The methods
Figure 1 shows the general history-matching workflow, introduced by Avansi et al. (2016). The three methods follow this workflow, differing in Step 10 as detailed in Section 2.2.
Fig. 1 General history-matching workflow (introduced by Avansi et al., 2016).
As Figure 1 shows, the process begins by defining uncertain attributes (Step 1). The range of attributes is discretized into levels, and a discrete probability distribution represents each attribute.
Next, we define the tolerance for the observed values (Step 2). In this step, we define the Acceptable Quadratic Distance (AQD) (Eq. (2)), used to normalize the Objective Functions (OFs) (Eq. (3)). When the normalized quadratic distance of a model falls between −1 and 1, the deviation between the simulated and observed data is of the same magnitude as the tolerance defined for the observed data. In some specific situations, however, models outside this range can be accepted. The tolerance remains fixed throughout the uncertainty reduction process.
Before beginning, we choose between history matching and uncertainty reduction (Step 3). Although similar, they have different objectives: uncertainty reduction characterizes the attributes to fit the observed data by modifying characteristics of their probability distributions, whereas history matching finds the model(s) that best reproduce(s) past behavior, usually using an optimization method.
Once the process is selected, Step 4 generates n combinations of attributes (n models). Details on this step are in Section 2.1. We then simulate n models (Step 5).
The Normalized Quadratic Distance (NQD) is used to evaluate the quality of the simulated models, as shown in equation (1):

$$\mathrm{NQD} = \frac{\sum_{i=1}^{N} \left(\mathrm{Sim}_i - \mathrm{Hist}_i\right)^2}{\mathrm{AQD}} \qquad (1)$$

The Acceptable Quadratic Distance (AQD) is defined in equation (2):

$$\mathrm{AQD} = \sum_{i=1}^{N} \left(\mathrm{Tol} \times \mathrm{Hist}_i + C_p\right)^2 \qquad (2)$$

where Tol is the tolerance, given as a percentage of the observed data, and $C_p$ is a constant to avoid division by zero (when the data have values close to zero, which can occur with the water production rate).
As highlighted by Maschio and Schiozer (2016), there is no standard rule for choosing Tol and $C_p$, which might depend on engineering judgment. The choice of either depends on the reliability of the observed data and usually on the type of data. Maschio and Schiozer (2016) mention gas rates as an example: they are usually difficult to measure, so measurement errors tend to be high, and the tolerance for this variable should thus be higher than for others such as oil rates.
To facilitate the visualization and analysis of the results, we use the Normalized Quadratic Distance with Sign (NQDS), defined in equation (3):

$$\mathrm{NQDS} = \frac{\mathrm{QDS}}{\mathrm{AQD}} \qquad (3)$$

where QDS is the Quadratic Distance with Sign and AQD is the Acceptable Quadratic Distance. The expression of QDS is:

$$\mathrm{QDS} = \frac{\sum_{i=1}^{N} \mathrm{SD}_i}{\left|\sum_{i=1}^{N} \mathrm{SD}_i\right|} \times \sum_{i=1}^{N} \mathrm{SD}_i^2 \qquad (4)$$

where N is the number of observed data points, and SD is the simple distance between the simulated (Sim) and observed (Hist) data, computed by equation (5):

$$\mathrm{SD}_i = \mathrm{Sim}_i - \mathrm{Hist}_i \qquad (5)$$
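As an illustration, the following sketch (in Python; a hypothetical implementation, not the authors' code) computes the NQDS of one objective function from the reconstructed equations (2)–(5):

```python
import numpy as np

def nqds(sim, hist, tol, cp=0.0):
    """NQDS of one objective function (Eqs. (2)-(5) as reconstructed above).

    sim, hist : simulated and observed series for one OF (same length N)
    tol       : tolerance, as a fraction of the observed data
    cp        : constant to avoid division by zero for near-zero data
    """
    sim = np.asarray(sim, dtype=float)
    hist = np.asarray(hist, dtype=float)
    sd = sim - hist                            # simple distance, Eq. (5)
    aqd = np.sum((tol * hist + cp) ** 2)       # acceptable quadratic distance, Eq. (2)
    total = sd.sum()
    sign = 1.0 if total == 0.0 else total / abs(total)
    return sign * np.sum(sd ** 2) / aqd        # Eqs. (4) and (3)

# A model is acceptable for this OF when -1 <= NQDS <= 1, e.g.:
# nqds(sim=[102.0, 97.0, 95.0], hist=[100.0, 100.0, 100.0], tol=0.10)
```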
NQDS is used to analyze which models are within the range of acceptance or whether the uncertainty must be reduced (Step 7). The plot of the NQDS aids this analysis. Figure 2 gives an example of NQDS plots for seven wells; the range of acceptance lies between the lines at −1 and 1.
Fig. 2 Example of NQDS plots.
In Figure 2, well 6 has the best match, with all models within the range of acceptance for NQDS. Wells 4 and 5 are examples where reducing uncertainty suffices. For wells 1, 2, 3, and 7, a review of the static model is necessary to include models with NQDS crossing the zero line.
The interval [−1, 1] for NQDS is the final objective. In some cases, though, values outside this range might be accepted when it is not possible to obtain values within the range (due to high uncertainties or lack of data). Depending on the case and on the objective (or phase) of the study, even NQDS values of 10 can be acceptable. Thus, this value is subjective and generally defined based on the authors' practical experience with the problem.
If the uncertainty ranges are still unacceptable in the initial stages, the numerical model should be checked for inconsistencies (Step 8), for instance, incorrect well productivity or well index. In this case, we make corrections (Step 9) and return to Step 4.
If the numerical model presents no problems, we apply the uncertainty-reduction method (Step 10). The three methods analyzed in this paper differ in this step, as we show in Section 2.2.
If the results are acceptable after reducing the uncertainty (Step 7), we increase the sample size to increase the likelihood of filtering more models (Step 11) and apply a filter (Step 12). As some models might still be outside the range of acceptance, they are filtered out, leaving only those within the desired range.
Finally, in Step 13, filtered models can be used to forecast production under uncertainties.
2.1 Sampling and generation of models
The sampling and generation of models follow the big loop approach, where the static modeling is integrated into the history matching. Traditionally, the modification of the model in a history-matching process is performed directly in the simulation model (small loop). Figure 3 illustrates these procedures.
Fig. 3 Illustration of a history matching procedure in a big loop approach.
As shown in Figure 3, in the small loop approach attributes are modified directly in the simulation model, whereas in the big loop approach attributes can also be modified in the static model. As there are attributes from both the static and the simulation model, in the big loop the sampling method combines attributes from both, for instance, p geostatistical attributes and q attributes of the simulation model, giving k = p + q in total.
The selected values of each of the p attributes are input to build the static model, using geological modeling software such as Petrel (Schlumberger®). The reservoir image (permeability in the x, y, and z directions, fracture spacing in the x, y, and z directions, matrix and fracture porosity, etc.) is generated by upscaling the static model. If we have n combinations of attributes, we will have n images.
The selected values of each of the q attributes are input to build (or represent) properties such as the relative permeability curve, capillary pressure curve, and PVT table. The combination of these properties with the generated image provides one simulation model. Thus, if we have n combinations of the k attributes, we will have n simulation models.
The n models are simulated using a flow simulator, such as IMEX (CMG), and the matching quality is evaluated using the NQDS index. If the quality is not acceptable (according to a pre-established criterion), the attribute probabilities are updated, and the process repeats from the sampling step.
In this work, we discretize the attribute ranges into levels and apply Discretized Latin Hypercube (DLHC) sampling to combine them. This technique ensures that values over the whole range of a model attribute are selected. Figure 3 shows an example where the sampled level is the mean value of the probability distribution, which is input to the geostatistical simulation.
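A minimal sketch of DLHC sampling, under the assumption that each attribute's discrete cumulative distribution is inverted over a stratified sample of [0, 1) (the function and its parameters are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def dlhc_sample(n_models, levels_per_attr, probs=None):
    """Discretized Latin Hypercube sampling (a sketch).

    For each attribute, n_models points are stratified over [0, 1)
    (one draw per equal-probability stratum, Latin Hypercube style),
    shuffled, and mapped to discrete levels through the attribute's
    cumulative probability distribution. `probs` is an optional list of
    probability vectors, one per attribute (uniform if omitted).
    """
    k = len(levels_per_attr)
    samples = np.empty((n_models, k), dtype=int)
    for j, n_levels in enumerate(levels_per_attr):
        p = np.full(n_levels, 1.0 / n_levels) if probs is None else np.asarray(probs[j])
        cdf = np.cumsum(p)
        u = (np.arange(n_models) + rng.random(n_models)) / n_models  # one point per stratum
        rng.shuffle(u)                         # decorrelate strata across attributes
        samples[:, j] = np.searchsorted(cdf, u)  # map u to a discrete level index
    return samples

# e.g., 500 models combining 18 attributes with 5 levels each:
# models = dlhc_sample(500, [5] * 18)
```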
For simulation, IMEX, for example, uses fracture permeability and fracture spacing in the I, J, and K directions. The matrix-fracture transfer function is calculated internally, and the user chooses among the available approaches, such as the Gilman and Kazemi (1983) matrix-fracture transfer coefficient, σ, given by equation (6):

$$\sigma = 4\left(\frac{k_x}{L_x^2} + \frac{k_y}{L_y^2} + \frac{k_z}{L_z^2}\right) \Delta x\, \Delta y\, \Delta z \qquad (6)$$

where $k_x$, $k_y$, and $k_z$ are the permeabilities in the x, y, and z directions, $L_x$, $L_y$, and $L_z$ are the fracture spacings in the x, y, and z directions, and $\Delta x \cdot \Delta y \cdot \Delta z$ is the total matrix volume.
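A one-function sketch of equation (6) (illustrative only; the simulator computes this internally from the upscaled grid properties):

```python
def gilman_kazemi_sigma(kx, ky, kz, lx, ly, lz, dx, dy, dz):
    """Gilman and Kazemi (1983) matrix-fracture transfer coefficient, Eq. (6).

    kx, ky, kz : matrix permeabilities in the x, y, z directions
    lx, ly, lz : fracture spacings in the x, y, z directions
    dx, dy, dz : cell dimensions; dx*dy*dz is the total matrix volume
    """
    shape_factor = 4.0 * (kx / lx**2 + ky / ly**2 + kz / lz**2)
    return shape_factor * dx * dy * dz
```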
The Discrete Fracture Network (DFN) is upscaled to provide these inputs. The software Petrel (Schlumberger) calculates these attributes internally for each grid cell. Considering a cell as a cube, it traces several lines perpendicular to each face, calculates the distances between consecutive intersections, and uses them to compute the fracture spacing in each direction.
2.2 Global uncertainty reduction
In this section, we show the global uncertainty reduction methods used in this work. We first present the uncertainty reduction algorithm, which is the same for all three methods, and then present the methods. Global in this context refers to attributes with global influence, that is, attributes that influence the whole reservoir; we use the term to distinguish them from local attributes, which influence only a region of the reservoir. It is not related to finding global or local minima (optimization).
2.2.1 Method of uncertainty reduction
The method of uncertainty reduction, based on Maschio and Schiozer (2016), starts by creating a correlation matrix to identify the attributes influencing each OF, as shown in Figure 4a; for instance, Attribute 1 influences objective functions OF1, OF3, and OF4, while Attribute 4 does not significantly affect any OF.
Fig. 4 Illustration of (a) correlation matrix, (b) generated histogram, and (c) the smoothed probability distribution in the nonparametric density estimation (Maschio and Schiozer, 2016).
Next, we compute the Local Objective Function (LOF) of each influential attribute, using equation (7):

$$\mathrm{LOF} = \frac{1}{\mathrm{NIF}} \sum_{j=1}^{\mathrm{NIF}} \mathrm{OF}_j \qquad (7)$$

where NIF is the number of influenced functions (for instance, for Attribute 1, NIF is 3: OF1, OF3, and OF4). The LOF is thus the arithmetic mean of the influenced OFs (those with R > $R_c$). Note that for each attribute there are N values of LOF, where N is the number of models.
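A sketch of the influence test and of equation (7), assuming the matrix of OF values for all models is available (the function name and the use of |R| are this sketch's assumptions):

```python
import numpy as np

def local_objective_function(attr_values, ofs, r_c=0.3):
    """LOF per model for one attribute (a sketch of Eq. (7)).

    attr_values : shape (n_models,), the attribute's sampled values
    ofs         : shape (n_models, n_ofs), the OF values (e.g., |NQDS|)
    An OF is 'influenced' when the Pearson correlation |R| between the
    attribute and that OF exceeds r_c; LOF is the mean of influenced OFs.
    Returns None if the attribute influences no OF.
    """
    attr_values = np.asarray(attr_values, dtype=float)
    ofs = np.asarray(ofs, dtype=float)
    r = np.array([np.corrcoef(attr_values, ofs[:, j])[0, 1]
                  for j in range(ofs.shape[1])])
    influenced = np.abs(r) > r_c              # the correlation-matrix row test
    if not influenced.any():
        return None                           # like Attribute 4 in Figure 4a
    return ofs[:, influenced].mean(axis=1)    # one LOF value per model, Eq. (7)
```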
We then generate the new probability distribution of each influential attribute using the nonparametric kernel density estimation method. After computing the LOF for each influential attribute, we sort the N models in increasing order of LOF and select those below a defined threshold. As only the models with LOF below this threshold are used to build the histogram, the threshold should be set to select good models (for instance, an NQD of five). An example of a frequency distribution is given in Figure 4b. The resulting histogram might contain discontinuities, which are undesirable in the initial stages of uncertainty reduction, when the search space is still large and the combinations may be too few to adequately cover it. To avoid discontinuities, the nonparametric kernel density estimation generates a new probability distribution by smoothing the histogram, eliminating discontinuities without prematurely excluding levels, as shown in Figure 4c. For more detail on the kernel density estimation, see Appendix A.
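A minimal sketch of this histogram smoothing, assuming coded levels and a Gaussian kernel with automatic bandwidth (SciPy's `gaussian_kde` stands in for the authors' kernel estimator):

```python
import numpy as np
from scipy.stats import gaussian_kde

def smoothed_level_probs(selected_levels, n_levels):
    """New discrete distribution for one attribute via kernel smoothing.

    Fits a Gaussian KDE to the levels (coded 0..n_levels-1) of the
    selected (good) models and re-discretizes it, so levels that the raw
    histogram would eliminate keep a small, nonzero probability.
    """
    kde = gaussian_kde(np.asarray(selected_levels, dtype=float))
    centers = np.arange(n_levels, dtype=float)
    density = kde(centers)              # kernel density at each level's code
    return density / density.sum()      # renormalize to a discrete PMF

# e.g., the selected models retained levels 2-4 only; the smoothed PMF
# returns some probability to neighboring levels:
# print(smoothed_level_probs([2, 3, 3, 4, 2, 3], n_levels=5))
```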
Despite smoothing the histogram, the kernel method might be unable to restore eliminated levels in some situations, for example, when there are no discontinuities but a sharp increase or decrease in the probability of the levels. To illustrate this, consider an attribute with five levels, as shown in Figure 5. The uncertainty reduction eliminates the first three levels, assigning high probabilities to levels 3 and 4 (shown in red in Fig. 5). In this case, the kernel method may be able to restore only levels 1 and 2: for more than two consecutive eliminated levels or highly skewed distributions, the kernel method cannot restore all eliminated levels. This indicates that the attribute should have levels with higher values, so we rescale the attribute, keeping the number of levels and moving them in the direction indicated by the method; in this case, to the right.
Fig. 5 Attribute re-scaling example.
When generating the new distribution, we can define the number and width of the levels as well as the shape of the distribution. In this work, we maintain the number and width of the levels. We also use the form of the histogram (in red) to define the shape. Figure 5 shows the triangular distribution we used, indicated by the dashed line, resulting in the five levels indicated by solid line bars.
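A sketch of the rescaling under the assumptions of Figure 5: keep the number and width of the levels, re-center them on the current mode, and assign a triangular discrete distribution (the exact weights are illustrative):

```python
import numpy as np

def rescale_levels(level_values, probs):
    """Re-center an attribute whose reduced distribution piled up at one
    edge (a sketch of Fig. 5; the triangular weights are illustrative).

    Keeps the number and width of the levels, centers the new levels on
    the current mode, and assigns a discrete triangular distribution.
    """
    level_values = np.asarray(level_values, dtype=float)
    n = len(level_values)
    width = level_values[1] - level_values[0]
    mode = level_values[int(np.argmax(probs))]
    half = n // 2
    new_values = mode + width * np.arange(-half, n - half)   # shifted levels
    weights = (half + 1) - np.abs(np.arange(n) - half)       # triangular shape
    return new_values, weights / weights.sum()
```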
2.2.2 Method 1
Method 1 follows the steps shown in Figure 6.
Fig. 6 Workflow of uncertainty reduction of Method 1.
As shown in Figure 6, Method 1 starts with a sensitivity analysis to identify the attributes that most affect the OFs (Step 1). To measure this influence, we use the correlation matrix (m rows of uncertain attributes and n columns of OFs, or NQDS). Each cell of the matrix stores the Pearson correlation coefficient (R) between an attribute and an OF. An attribute is influential if at least one value of R is greater than the threshold $R_c$. The influential attributes in this step comprise Group 1. (In this work, the concern is to find influential attributes to form a group; thus, there is no need to rank the attributes.)
Next, we simulate models by varying the attributes of Group 1 and fixing the others at their mean values (Step 2). This step also checks whether varying only the most influential attributes preserves the overall response variability.
Before reducing uncertainty, we check the symmetry of the NQDS indexes (Step 3). Some asymmetry is likely and acceptable provided it stays within the range of acceptance; the presence of models with both positive and negative NQDS is sufficient to reduce uncertainty, as the asymmetry can be further treated by a regional/local approach. Symmetry guarantees that the models are neither positively nor negatively biased; if they are biased, reducing uncertainty will not lead to matched models (Fig. 2, wells 1 and 2).
If necessary, we rescale attributes (Step 4) according to their influence. The most influential attribute in a group is rescaled first and, if the asymmetry remains, the second most influential attribute is rescaled.
With the asymmetry fixed, we then reduce the uncertainty (Step 5). This method, based on the work of Maschio and Schiozer (2016), updates the probability distribution of the influential attributes (Sect. 2.2.1). After reducing uncertainty, we maintain the distributions of these attributes.
Next, we apply the sensitivity analysis again (Step 6), letting the non-influential attributes vary according to their original probability distributions (for instance, uniform) while maintaining the already-reduced distributions of the influential attributes. After reducing the uncertainty of the attributes that strongly affect the OFs, the influence of other attributes may become significant at this step (the strong influence of one attribute may hide the influence of another).
If Step 6 produces a new group of influential attributes, Step 5 is repeated; otherwise, the global uncertainty reduction is complete. A high-level sketch of this loop follows, and an illustration of the uncertainty reduction process can be found in Appendix B.
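The sketch below assumes two hypothetical callables: `run_batch(varying, fixed)` samples and simulates models and returns the per-attribute samples with the NQDS matrix, and `reduce_uncertainty(samples, nqds)` returns the updated discrete distribution of one attribute (Sect. 2.2.1):

```python
import numpy as np

def is_influential(attr_samples, nqds, r_c):
    """|Pearson R| between the attribute and at least one OF exceeds r_c."""
    r = [abs(np.corrcoef(attr_samples, nqds[:, j])[0, 1])
         for j in range(nqds.shape[1])]
    return max(r) > r_c

def isa_uncertainty_reduction(attributes, run_batch, reduce_uncertainty,
                              r_c=0.3, max_rounds=10):
    """High-level sketch of Method 1 (ISA); names are illustrative."""
    reduced = {}  # attributes whose distributions were already reduced
    for _ in range(max_rounds):
        varying = {k: v for k, v in attributes.items() if k not in reduced}
        if not varying:
            break
        samples, nqds = run_batch(varying, reduced)   # CSA round (Steps 1/6)
        group = [k for k in varying if is_influential(samples[k], nqds, r_c)]
        if not group:
            break                                # no newly influential attributes
        for k in group:                          # Step 5: reduce this group only
            attributes[k] = reduce_uncertainty(samples[k], nqds)
            reduced[k] = attributes[k]           # keep this distribution onward
    return attributes
```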
2.2.3 Method 2
Method 2 follows the steps shown in Figure 7.
Fig. 7 Workflow of uncertainty reduction of Method 2.
As Figure 7 shows, in Method 2 the sensitivity analysis is applied once, at the beginning, to identify the influential attributes. With a smaller value of $R_c$ (compared to the value used in Method 1), we identify one group of influential attributes and reduce the uncertainty of this group.
2.2.4 Method 3
Method 3 follows the steps shown in Figure 8.
Fig. 8 Workflow of uncertainty reduction of Method 3.
As shown in Figure 8, there is no sensitivity analysis in this method, and all attributes are considered in the uncertainty reduction step.
3 Application
3.1 Case description
The methodologies are tested on a complex carbonate field based on a real case. The synthetic reservoir, called the reference model (FR-R), plays the role of the "real" reservoir, with unknown characteristics. We extract information from FR-R and build a refined model (FR-SF). Finally, we upscale FR-SF to generate the coarse model (FR-SC), used for flow simulation.
3.2 Reference model: FR-R
The FR-R was built from 25 synthetic well logs of matrix porosity and permeability, facies distribution, and fracture intensity. The model has a corner-point grid of 35 × 56 × 30 blocks (58 800 in total) measuring, on average, 100 × 100 × 10 m. Fractures are four-sided and have an average length, aperture, and orientation of 300 m, 0.4 mm, and 255°, respectively. We simulated fracture properties to create the DFN, which was then converted to an equivalent grid property.
To generate the production history, we applied a production strategy comprising 17 vertical wells: 12 producers completed in the four top layers and five injectors completed in the five bottom layers of the reservoir. Note that these wells are different from those used to build FR-R. We created a secondary recovery case, as determining the best recovery strategy is outside the scope of this work: a viable strategy based on a five-spot production pattern, with plausible flow rates and recovery factors for naturally fractured reservoirs, used to test the methods.
Figure 9 shows the matrix porosity map with the production strategy. The simulation of this strategy in the FR-R model generated 8 years of production history. After generating the history data, we no longer use the FR-R model, as it is the "real" reservoir whose characteristics are unknown.
Fig. 9 Matrix porosity map and the production strategy. PROD represents producers and INJ represents injectors.
3.3 Refined simulation model: FR-SF
For the refined model, we used the 17 well logs extracted from FR-R (at the wells of the production strategy we defined) for matrix porosity and permeability, facies distribution, and fracture intensity. This model has the same resolution as FR-R: a grid with 35 × 56 × 30 blocks (58 800 in total), measuring 100 × 100 × 10 m.
The difference between FR-R and FR-SF is that FR-SF was constructed using well-log information from the 17 wells extracted from the reference ("real") reservoir FR-R. The FR-SF is built from the properties sampled at these 17 wells and uses different attribute values as input to generate multiple models, which are then upscaled for simulation.
3.4 Coarse simulation model: FR-SC
For flow simulation, we upscaled the refined model (FR-SF) to create the coarse simulation model (FR-SC). In this work, we upscaled in the vertical direction, grouping blocks in pairs to halve the number of blocks in that direction.
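A minimal sketch of this vertical grouping, assuming a porosity-like property averaged two blocks at a time (directional permeabilities would generally need harmonic or flow-based upscaling instead; the actual upscaling was done in the modeling software):

```python
import numpy as np

def upscale_vertical(prop, weights=None):
    """Group every two blocks along k (a sketch).

    prop    : array of shape (nx, ny, nz), nz even
    weights : optional block volumes for a volume-weighted average
    """
    if weights is None:
        weights = np.ones_like(prop)
    num = (prop * weights).reshape(*prop.shape[:2], -1, 2).sum(axis=-1)
    den = weights.reshape(*weights.shape[:2], -1, 2).sum(axis=-1)
    return num / den   # shape (nx, ny, nz // 2)
```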
The FR-SC model is a corner-point grid with 35 × 56 × 15 blocks (29 400 in total), measuring 100 × 100 × 20 m. As the field is typical of a Type II naturally fractured reservoir (the essential reservoir permeability comes mainly from fractures; Nelson, 2001), we applied the dual-porosity model to separate the matrix and fracture grids. The main characteristics of this model are:
-
Reference pressure of 250 kgf/cm2 and a bubble point pressure of 201.5 kgf/cm2;
-
Depth of 2350 m and water–oil contact at 2530 m;
-
Oil API of 38°;
-
Two rock types: rock type 1 with a water-wet trend and rock type 2 with an intermediate wettability trend;
-
Liquid production rates and water injection rates constrain the simulation over the history period.
Fractures have straight-line relative permeability curves, frequently used as a good approximation (Mazo and Schiozer, 2013; Correia et al., 2016), and zero capillary pressure, a common practice when simulating naturally fractured reservoirs (Correia et al., 2016).
3.5 Uncertain attributes
This work considered 18 uncertain attributes, as shown in Table 1. These uncertain attributes have a potential impact on the reservoir performance.
Uncertain attributes, discretized into five levels.
Of the 18 uncertain attributes in Table 1, attributes 1–14 are from the geostatistical model and 15–18 from the simulation model.
-
Variogram in x and y directions: These attributes influence the porosity map of the static model.
-
Attributes 3–7 are the proportions of each facies present in the reservoir. They are further grouped by specific rock type in the simulation model.
-
Correlation coefficient of porosity and permeability: this attribute varies the correlation between permeability and porosity of the static model. Thus, it impacts the permeability map.
-
Correlation coefficient of fracture intensity and the distance from a fault: this attribute quantifies changes in fracture intensity as the distance to the fault increases. It affects the fracture intensity map.
-
Variogram in x and y directions of fracture intensity: these attributes also influence the fracture intensity map. They are the parameters of the variogram of fracture intensity map.
-
Fracture length, orientation, and aperture: these are fracture characteristics, defining the size (length and aperture) and direction (orientation) of the fractures.
The remaining attributes are parameters of the capillary pressure and relative permeability expressions of the simulation model. The five facies of the static model are grouped into two rock types in the simulation model:
-
Coefficient of relative permeability of rock type 1: changes the water relative permeability curve of rock type 1;
-
Coefficient of capillary pressure of rock type 1: changes the capillary pressure curve of rock type 1;
-
Coefficient of relative permeability of rock type 2: changes the water relative permeability curve of rock type 2;
-
Coefficient of capillary pressure of rock type 2: changes the capillary pressure curve of rock type 2.
The discretization of attributes allows working in a discrete domain and evaluating, for example, categorical variables, adding flexibility; according to Maschio and Schiozer (2016), it also allows working with coded variables, facilitating the definition of the bandwidth used for kernel estimation. There is no rule for the number of levels, which depends on the case. In this work, we divided the initial range of each attribute into uniform intervals and took the center point of each interval as the corresponding level's value, as sketched below. If the range is small and/or there are many levels, different levels might result in similar responses; on the other hand, if the response is sensitive to the attribute, many levels might result in significantly different responses. Thus, the number of levels must be defined from prior knowledge of the attribute or by testing different numbers of levels to choose the best one.
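A sketch of this discretization (the range in the usage note is illustrative, not taken from Table 1):

```python
import numpy as np

def discretize(lo, hi, n_levels=5):
    """Divide [lo, hi] into uniform intervals and return each interval's
    center as the level's value (as described above)."""
    edges = np.linspace(lo, hi, n_levels + 1)
    return 0.5 * (edges[:-1] + edges[1:])

# Illustrative numbers only: an aperture range of [0.1, 0.6] mm gives
# levels [0.15, 0.25, 0.35, 0.45, 0.55].
```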
3.6 Configuration of the uncertainty-reduction process
To match the observed reservoir behavior, we considered 56 Objective Functions (OFs): oil production rates and bottom-hole pressures for the 12 producers, water injection rates and bottom-hole pressures for the five injectors, water production rates for all producers except PROD5 and PROD9, and water breakthrough time for the 12 producers. As the water rates of wells PROD5 and PROD9 are very low, they were not considered as OFs; their water breakthrough was included instead. We measured the quality of the match using NQDS (Eq. (3)).
To calculate the AQD given in equation (2), we used a tolerance of 0.05 (5%) for the injected water rates and produced liquid rates (informed rates), a tolerance of 0.1 (10%) for the other OFs, and a $C_p$ of 10 m3/day for the water production rate of PROD8 (which has low production rates).
A common concern regarding the iterative uncertainty reduction process is the number of simulations per iteration. Schiozer et al. (2015) suggested 100–300 to be a good number to quantify risk in their study. For this case, we ran tests with 300, 500, and 700 simulations and concluded that 500 provide an acceptable range of variability of NQDS.
We consider an attribute to be influential when at least one value of R > $R_c$, i.e., at least one OF must be influenced. The $R_c$ value can be changed depending on the quality and amount of data and the required accuracy. This work uses the value 0.3 based on the work of Maschio and Schiozer (2016), who suggested a value between 0.1 and 0.4 after comparing results with different values of $R_c$. For each selected attribute, we classify the models according to a threshold value of LOF, set at five. There may be only a few models with LOF under five, which is insufficient to construct the new histogram of an attribute; in these cases, we use a minimum of 60% of the models to generate the new histogram, as sketched below.
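A sketch of this selection rule (threshold and fallback fraction as stated above; the function name is illustrative):

```python
import numpy as np

def select_models_for_histogram(lof, threshold=5.0, min_fraction=0.6):
    """Select the models used to build an attribute's new histogram.

    Keep models with LOF below the threshold; if they are fewer than
    `min_fraction` of the total, fall back to the best `min_fraction`
    of the models (by increasing LOF). Returns model indices.
    """
    lof = np.asarray(lof, dtype=float)
    order = np.argsort(lof)                 # models in increasing order of LOF
    below = order[lof[order] < threshold]
    n_min = int(np.ceil(min_fraction * len(lof)))
    return below if len(below) >= n_min else order[:n_min]
```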
4 Results and discussions
In this section, we compare the results from the three methods regarding the matching quality and the number of simulations.
4.1 Most influential attributes
When reducing uncertainty, different methods may identify and work with different influential attributes.
Method 1 identified two groups: Group 1 comprising fracture aperture (Abf), orientation (Orf), and the coefficient of capillary pressure of rock type 1 (Pcdf1); and Group 2 comprising the correlation coefficient of fracture intensity and distance to the fault (Ckf) and fracture length (Cf). Thus, Method 1 reduced the uncertainty of five attributes.
Method 2 identified the same five attributes, all in the same group.
Method 3 identified these five attributes through the iterations, with the difference that, in the final iteration, the variogram of fracture intensity in the x-direction was noted as influential. However, it influenced only the bottom-hole pressure of injector 2, which was already well matched, and the resulting histogram of the attribute was almost unchanged.
Therefore, the three methods identified the same five attributes as most influential, showing the process to be very consistent.
4.2 Rescaling of fracture aperture
For the three methods, the histogram of fracture aperture was the same after two iterations, as shown in Figure 10 (bars of levels 2, 3, and 4). Thus, we applied the same rescaling in the three methods, using a triangular distribution, as shown in Figure 10 (bars of levels 0A–4A).
Fig. 10 Fracture aperture rescaling used in the three methods.
As levels 0 and 1 had zero probability (Fig. 10), we added two levels to the right, resulting in a triangular shape. Table 2 shows the new levels.
New levels of fracture aperture after rescaling.
In Table 2, the capital letter "A" distinguishes the new levels from the previous ones (lower case). The new distribution is centered on what was level 4 before rescaling, with 36% probability.
4.3 NQDS variability
Figure 11 plots the NQDS values for (a) bottom-hole pressure and (b) oil production rate for each model, for the three methods, after reducing the uncertainty.
Fig. 11 NQDS dispersion for (a) bottom-hole pressure of producers and (b) oil production rate.
Figures 11a and 11b show significantly reduced variability for all producers, for all three methods, in both bottom-hole pressure and oil production rate. Moreover, the variability is similar across methods. The same observation holds for the other objective functions, with little difference between methods. As we are comparing the uncertainty reduction process on a global scale, we can consider the variability of the three methods practically the same.
When the difference between plots is small, as in Figure 11, the boxplot is better suited to compare the methods, as it shows more detail, such as the 25th and 75th percentiles and the mean and median values. Figure 12 shows the boxplot of NQDS for (a) the bottom-hole pressure of producers 7, 10, 11, and 12 and (b) the water rate of wells 7, 8, 11, and 12 for the initial run (first), Method 1 (second), Method 2 (third), and Method 3 (fourth).
Fig. 12 Boxplot of NQDS for (a) bottom-hole pressure of producers 7, 10, 11, and 12 and (b) water rate of wells 7, 8, 11, and 12 for the initial run (first), Method 1 (second), Method 2 (third), and Method 3 (fourth).
Figures 12a and 12b show a significant reduction in the variability of both bottom-hole pressures and water rates, with no significant difference between the methods for these OFs. The same conclusion extends to the other OFs; the differences in the boxplots were insufficient to indicate one method over another.
4.4 Models within different ranges of NQDS
Another assessment is to compare the percentage of models, for each method, that falls within a defined NQDS value. For instance, we can assess the percentage of models in which all twelve producers have |NQDS| lower than five for the bottom-hole pressure. Figure 13 shows the graphs for the bottom-hole pressure of the producer wells and the oil rates.
Fig. 13 Models with all producers with |NQDS| less than the cut-off value for (a) bottom-hole pressure and (b) oil rate.
Figure 13 shows that the results for bottom-hole pressure were almost identical. Despite some differences, the oil rates (Fig. 13b) were also very similar for the three methods; for example, the percentage of models with |NQDS| less than four is around 20% for all of them.
Figure 14 presents the graphs for water production rates and water breakthrough.
Fig. 14 Models with all producers with |NQDS| lower than the cut-off value for (a) water breakthrough and (b) water rate.
Figure 14a shows similar results for water breakthrough. The water production rates present some small differences (Fig. 14b); however, as the cut-off value decreases, the difference also decreases, as can be seen by comparing the percentage of models at the |NQDS| cut-off value of 30.
4.5 Histogram analysis
We also compared the histograms of the influential attributes from each method. Figure 15 shows the relative frequency after uncertainty reduction of the five most influential attributes for Methods 1, 2, and 3: fracture aperture, orientation, and length, the coefficient of capillary pressure of rock type 1, and the correlation coefficient of fracture intensity and distance to the fault. For fracture aperture, the levels have the suffix "A" to indicate that they were rescaled.
Fig. 15 Relative frequency after uncertainty reduction of most influential attributes for (a) Method 1, (b) Method 2, and (c) Method 3.
Figure 15 shows similar histograms for most attributes after uncertainty reduction. The correlation coefficient of fracture intensity and distance to the fault (Ckf), for instance, has the same histogram in all methods. Fracture aperture and the coefficient of capillary pressure of rock type 1 have the same most probable levels (1A and 2A for the aperture). However, as history matching commonly has multiple answers, small differences are expected, as observed for the fracture orientation.
Despite some differences, we also conclude that the histograms presented similarities for all methods.
4.6 Number of simulations
Finally, we compared the number of simulations required by each method to reduce the uncertainty. Table 3 shows the number of simulations in each iteration for each method.
Number of simulations used in each method.
As Table 3 shows, Method 1 demanded 2657 simulations to reduce the uncertainty of the five most influential attributes, a drop of 24% and 12% compared to Methods 3 and 2, respectively. The advantage of Method 1 is that the uncertainty reduction of the first group demands fewer simulations, as there are far fewer attributes to vary. In this case, there were only three attributes, and only 292 simulations were run (83, 68, 70, and 71 simulations).
As we have shown in Sections 4.1 through 4.5, the match quality between the three methods presented insignificant differences. Thus, Method 1 (using ISA) performed as well as Method 3 (varying all attributes) while requiring nearly 900 fewer simulations. In complex cases, demanding intense computational effort, this difference might be significant.
Compared to CSA, ISA also found the influential attributes and achieved similar matching quality, showing that applying sensitivity analysis iteratively is a good option. Moreover, the proposed sensitivity analysis technique is applied in a big loop context, which integrates the static modeling. In this context, applying some types of sensitivity analysis, such as those based on metamodels, is difficult, as the process involves geostatistical and flow simulation, and finding accurate proxy models would be challenging.
In this work, we used a case with 18 uncertain attributes, fewer than the 180 inputs used by Touzani and Busby (2014), each of which represented a homogeneous zone inside a layer of the reservoir. However, the attributes evaluated in the present work have a complex relationship with the reservoir response; the mean value of the aperture distribution is one example.
5 Conclusion
In this work, we assessed the use of ISA over CSA through the comparison of three uncertainty reduction methods applied to a complex naturally fractured reservoir model. The three methods showed similar matching quality when comparing NQDS variability, the number of models within acceptable ranges, and histograms of influential attributes.
They also identified the same five most influential attributes out of the initial 18, showing that a small percentage of the attributes is responsible for most of the variability of the responses. Therefore, Method 1 focuses the effort on the most important attributes. For the case presented in this work, a Type II fractured reservoir, a few fracture attributes were responsible for most of the variability of the responses.
Comparing the number of simulations, however, showed the advantage of reducing the uncertainty by groups. Reducing the uncertainty of a smaller group of influential attributes at a time, i.e., one group, then another, and so on, resulted in a 12% decrease in simulations compared with reducing the uncertainty of all the most influential attributes grouped together (CSA). Compared with the method without sensitivity analysis, the number of simulations was reduced by 24%. Therefore, reducing uncertainty by groups proved to be a better option than the CSA approach in this case. It was successfully applied to a complex naturally fractured reservoir in a big loop approach, which shows it is suitable for screening parameters in practical reservoir cases.
Acknowledgments
This work was conducted with the support of Energi Simulation and Petrobras within the ANP R&D tax as “commitment to research and development investments”. The authors are grateful for the support of the Center of Petroleum Studies (CEPETRO-UNICAMP/Brazil), the Department of Energy (DE-FEM-UNICAMP/Brazil) and Research Group in Reservoir Simulation and Management (UNISIM-UNICAMP/Brazil). In addition, special thanks to CMG and Schlumberger for software licenses.
Appendix A
Nonparametric density estimation
Nonparametric density functions are those that do not require parameters for their characterization, such as the mean and standard deviation of a (parametric) normal density function. An expression for the nonparametric density estimate is given in equation (A1) (Faucher et al., 2001):

$$\hat{F}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right) \qquad (\mathrm{A1})$$

where n is the sample size ($x_1, x_2, \ldots, x_n$), h is a bandwidth that determines the degree of smoothing, and K is a Probability Distribution Function (PDF) of a positive kernel ($K(u) \geq 0$). This PDF must satisfy equation (A2):

$$\int_{-\infty}^{+\infty} K(u)\, \mathrm{d}u = 1 \qquad (\mathrm{A2})$$
The estimator $\hat{F}(x)$ counts the fraction of observations that are close to point x: the fewer the observations near x, the smaller $\hat{F}(x)$, and vice versa. According to the authors, the shape of the distribution is determined directly from the data and depends on the bandwidth. A discussion on the selection of this parameter is given by Altman and Léger (1995). For practical purposes, commercial software such as Matlab (MathWorks®) includes an algorithm that determines the best bandwidth value automatically.
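A minimal sketch of equation (A1), assuming a Gaussian kernel (any positive PDF satisfying Eq. (A2) would do):

```python
import numpy as np

def gaussian_kernel(u):
    """A positive PDF integrating to one (Eq. (A2))."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def kernel_density(x, data, h, kernel=gaussian_kernel):
    """Kernel density estimate of Eq. (A1) at the points x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    data = np.asarray(data, dtype=float)
    u = (x[:, None] - data[None, :]) / h    # (x - x_i) / h for every pair
    return kernel(u).mean(axis=1) / h       # (1/(n h)) * sum_i K((x - x_i)/h)
```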
Appendix B
Illustration of the uncertainty reduction process
Figure B1 shows an example of the uncertainty reduction of two influential attribute groups.
Fig. B1 Uncertainty reduction for two attribute groups.
Figure B1 shows that:
- (i) Sensitivity analysis (considering all attributes with uniform discrete distributions) identified aperture and orientation as influential. These are grouped into Group 1; the other attributes are fixed at their mean values.
- (ii) Uncertainty reduction for Group 1 resulted in the final distributions for aperture and orientation. We keep these distributions constant from now on.
- (iii) Allowing the other attributes to vary again with uniform discrete distributions, we identified a second group of influential attributes: matrix permeability and porosity. Now, Group 1 has fixed distributions (from step ii), Group 2 has uniform discrete distributions, and the others are fixed at their mean values.
- (iv) The uncertainty reduction for Group 2 resulted in the final distributions for matrix permeability and porosity. Thus, we find the distributions for Group 1 in step ii and for Group 2 in step iv; the remaining attributes keep a uniform discrete distribution.
References
- Al-Anazi A., Babadagli T. (2009) Automatic fracture density update using smart data and artificial neural networks, Comput. Geosci. 36, 3, 335–347. doi: 10.1016/j.cageo.2009.08.005.
- Almeida Netto S.L., Schiozer D.J., Ligero E.L., Maschio C. (2003, January) History matching using uncertainty analysis, Canadian International Petroleum Conference, Petroleum Society of Canada. doi: 10.2118/2003-145.
- Al-Harbi M., Cheng H., He Z., Datta-Gupta A. (2004, September) Streamline-based production data integration in naturally fractured reservoirs, SPE Annual Technical Conference and Exhibition, Society of Petroleum Engineers. doi: 10.2118/89914-PA.
- Altman N., Léger C. (1995) Bandwidth selection for kernel distribution function estimation, J. Stat. Plann. Inference 46, 2, 195–214. doi: 10.1016/0378-3758(94)00102-2.
- Arastoopour H., Chen S.T. (1991, January) Sensitivity analysis of key reservoir parameters in gas reservoirs, SPE Gas Technology Symposium, Society of Petroleum Engineers. doi: 10.2118/21515-MS.
- Avansi G.D., Maschio C., Schiozer D.J. (2016) Simultaneous history matching approach using reservoir-characterization and reservoir-simulation studies, SPE Reserv. Eval. Eng. 19, 694–712. doi: 10.2118/179740-PA.
- Becerra G.G., Maschio C., Schiozer D.J. (2011) Petroleum reservoir uncertainty mitigation through the integration with production history matching, J. Braz. Soc. Mech. Sci. Eng. XXXIII, 2, 147–158. doi: 10.1590/S1678-58782011000200005.
- Bourbiaux B. (2010) Fractured reservoir simulation: A challenging and rewarding issue, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 65, 2, 227–238. doi: 10.2516/ogst/2009063.
- Caers J. (2003) Efficient gradual deformation using streamline-based proxy method, J. Petrol. Sci. Eng. 39, 57–83. doi: 10.1016/S0920-4105(03)00040-8.
- Correia M.G., Maschio C., von Hohendorff Filho J.C., Schiozer D.J. (2016) The impact of time-dependent matrix-fracture fluid transfer in upscaling match procedures, J. Petrol. Sci. Eng. 146, 752–763. doi: 10.1016/j.petrol.2016.07.039.
- Costa L.A.N., Maschio C., Schiozer D.J. (2018) A new methodology to reduce uncertainty of global attributes in naturally fractured reservoirs, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 73, 41, 1–23. doi: 10.2516/ogst/2018038.
- Cui H., Kelkar M. (2005, April) History matching of naturally fractured reservoir and a case study, SPE Western Regional Meeting, Society of Petroleum Engineers. doi: 10.2118/94037-MS.
- De Lima A., Lange A., Schiozer D.J. (2009, June) Assisted history matching for the characterization and recovery optimization of fractured reservoirs using connectivity analysis, EAGE Annual Conference & Exhibition incorporating SPE Europec, Society of Petroleum Engineers. doi: 10.2118/154392-MS.
- Faucher D., Rasmussen P.F., Bobée B. (2001) A distribution function based bandwidth selection method for kernel quantile estimation, J. Hydrol. 250, 1–4, 1–11. doi: 10.1016/S0022-1694(01)00359-6.
- Fenwick D.H., Scheidt C., Caers J. (2014) Quantifying asymmetric parameter interaction in sensitivity analysis: Application to reservoir modeling, Math. Geosci. 46, 4, 493–511. doi: 10.1007/s11004-014-9530-5.
- Gang T., Kelkar M. (2008) History matching for determination of fracture permeability and capillary pressure, SPE Reserv. Eval. Eng. 11, 5, 813–822. doi: 10.2118/101052-PA.
- Gilman J.R., Kazemi H. (1983) Improvements in simulation of naturally fractured reservoirs, Soc. Pet. Eng. J. 23, 4, 695–707.
- Ginting V., Pereira F., Presho M., Wo S. (2011) Application of the two-stage Markov chain Monte Carlo method for characterization of fractured reservoirs using a surrogate flow model, Comput. Geosci. 15, 4, 691–707. doi: 10.1007/s10596-011-9236-4.
- Iooss B., Lemaître P. (2015) A review on global sensitivity analysis methods, in: Dellino G., Meloni C. (eds), Uncertainty management in simulation-optimization of complex systems, Operations Research/Computer Science Interfaces Series, Vol. 59, Springer, Boston, MA. doi: 10.1007/978-1-4899-7547-8_5.
- Jafari A., Babadagli T. (2008) A sensitivity analysis for effective parameters on fracture network permeability, SPE Western Regional and Pacific Section AAPG Joint Meeting, Society of Petroleum Engineers. doi: 10.2118/113618-MS.
- Jenni S., Hu L.Y., Basquet R., de Marsily G., Bourbiaux B. (2007) History matching of a stochastic model of field-scale fractures: Methodology and case study, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 62, 2, 265–276. doi: 10.2516/ogst:2007022.
- Kassenov B., King G.R., Chaudhri M., Abdrakhmanova A., Jenkins S., Bateman P., Iskakov E. (2014, November) Efficient workflow for assisted history matching and Brownfield design of experiments for the Tengiz field, SPE Annual Caspian Technical Conference and Exhibition, Society of Petroleum Engineers. doi: 10.2118/172329-MS.
- Lange A.G. (2009) Assisted history matching for the characterization of fractured reservoirs, AAPG Bull. 93, 11, 1609–1619. doi: 10.1306/08040909050.
- Lemonnier P., Bourbiaux B. (2010a) Simulation of naturally fractured reservoirs. State of the art – Part 1 – Physical mechanisms and simulator formulation, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 65, 2, 239–262. doi: 10.2516/ogst/2009066.
- Lemonnier P., Bourbiaux B. (2010b) Simulation of naturally fractured reservoirs. State of the art – Part 2 – Matrix-fracture transfers and typical features of numerical studies, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 65, 2, 263–286. doi: 10.2516/ogst/2009067.
- Liu J., Bodvarsson G.S., Wu Y.S. (2003) Analysis of flow behavior in fractured lithophysal reservoirs, J. Contam. Hydrol. 62, 189–211. doi: 10.1016/S0169-7722(02)00169-9.
- Manceau J.C., Rohmer J. (2016) Post-injection trapping of mobile CO2 in deep aquifers: Assessing the importance of model and parameter uncertainties, Comput. Geosci. 20, 6, 1251–1267. doi: 10.1007/s10596-016-9588-x.
- Maschio C., Schiozer D.J., Moura Filho M.A.B., Becerra G.G. (2009) A methodology to reduce uncertainty constrained to observed data, SPE Reserv. Eval. Eng. 12, 1, 167–180. doi: 10.2118/111030-PA.
- Maschio C., Schiozer D.J. (2016) Probabilistic history matching using discrete Latin Hypercube sampling and nonparametric density estimation, J. Petrol. Sci. Eng. 147, 98–115. doi: 10.1016/j.petrol.2016.05.011.
- Mazo E.M., Schiozer D.J. (2013, June) Modeling fracture relative permeability – what is the best option?, 75th EAGE Conference & Exhibition incorporating SPE EUROPEC 2013, Society of Petroleum Engineers. doi: 10.3997/2214-4609.20130867.
- Nelson R. (2001) Geologic analysis of naturally fractured reservoirs, Gulf Professional Publishing, TX, USA.
- Ruffo P., Bazzana L., Consonni A., Corradi A., Saltelli A., Tarantola S. (2006) Hydrocarbon exploration risk evaluation through uncertainty and sensitivity analyses techniques, Reliab. Eng. Syst. Safety 91, 10–11, 1155–1162. doi: 10.1016/j.ress.2005.11.056.
- Schiozer D.J., Avansi G.D., dos Santos A.A.S. (2015) Risk quantification combining geostatistical realizations and discretized Latin Hypercube, J. Braz. Soc. Mech. Sci. Eng. 1–13. doi: 10.1007/s40430-016-0576-9.
- Suzuki S., Daly C., Caers J., Mueller D. (2007) History matching of naturally fractured reservoirs using elastic stress simulation and probability perturbation method, SPE J. 12, 1, 118–129. doi: 10.2118/95498-PA.
- Tolstukhin E., Lyngnes B., Sudan H.H. (2012, June) Ekofisk 4D seismic history matching workflow, EAGE Annual Conference & Exhibition incorporating EUROPEC, Society of Petroleum Engineers. doi: 10.2118/154347-MS.
- Touzani S., Busby D. (2014) Screening method using the derivative-based global sensitivity indices with application to reservoir simulator, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 69, 4, 619–632. doi: 10.2516/ogst/2013195.
- Verscheure M., Fourno A., Chilès J.P. (2012) Joint inversion of fracture model properties for CO2 storage monitoring or oil recovery history matching, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 67, 2, 221–235. doi: 10.2516/ogst/2011176.
- Yu W., Luo Z., Javadpour F., Varavei A., Sepehrnoori K. (2014) Sensitivity analysis of hydraulic fracture geometry in shale gas reservoirs, J. Petrol. Sci. Eng. 113, 1–7. doi: 10.1016/j.petrol.2013.12.005.