Regular Article
A new methodology to reduce uncertainty of global attributes in naturally fractured reservoirs
Department of Energy (FEM), CEPETRO, University of Campinas (Unicamp), Campinas, São Paulo, Brazil
* Corresponding author: luisnagasaki@gmail.com
Received: 28 February 2018
Accepted: 13 July 2018
Accurately characterizing fractures is complex. Several studies have proposed reducing uncertainty by incorporating fracture characterization into simulations, using a probabilistic approach to maintain the geological consistency of a range of models instead of a single matched model. We propose a new methodology, based on one of the steps of a general history-matching workflow, to reduce the uncertainty of reservoir attributes in naturally fractured reservoirs. This methodology maintains geological consistency and can treat many reservoir attributes. To guarantee geological consistency, the geostatistical attributes (e.g., fracture aperture, length, and orientation) are used as parameters in the history matching. This allows us to control Discrete Fracture Network attributes and systematically modify fractures. An iterative sensitivity analysis allows the inclusion of the many (30 or more) uncertain attributes that might occur in a practical case: at each uncertainty reduction step, a sensitivity analysis identifies the most influential attributes to treat in that step. Working from the general history-matching workflow of Avansi et al. (2016), we adapted steps for use with our methodology, integrating the history matching with geostatistical modeling of fractures and other properties in a big loop approach. We applied our methodology to a synthetic case study of a naturally fractured reservoir, based on a real semi-synthetic carbonate field, offshore Brazil, to demonstrate the applicability in practical and complex cases. From the initial 18 uncertain attributes, we worked with only 5 and reduced the overall variability of the Objective Functions. Although the focus is on naturally fractured reservoirs, the proposed methodology can be applied to any type of reservoir.
© L.A.N. Costa et al., published by IFP Energies nouvelles, 2018
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Nomenclature
NQD: Normalized Quadratic Distance
AQD: Acceptable Quadratic Distance
NQDS: Normalized Quadratic Distance with Sign
QDS: Quadratic Distance with Sign
C_p: Constant added to production data
k: Number of uncertain attributes
p: Uncertain attributes from the simulation model
q: Uncertain attributes from the geostatistical model
DLHC: Discrete Latin Hypercube
R_c: Cut-off value for the correlation coefficient
NIF: Number of Influenced Functions
FR-SF: Refined simulation model
FR-SC: Coarse simulation model
DFN: Discrete Fracture Network
1 Introduction
History matching is an important step in reservoir management as it can provide reliable models to forecast production. A lack of reliable, detailed information about reservoir characteristics leads to many uncertainties when modeling. Numerous oil reserves are in naturally fractured reservoirs, as noted by Nelson (2001), Allan and Sun (2003), and Bourbiaux (2010), which hold an estimated 20% of the world's oil reserves and production (Firoozabadi, 2000). Despite this potential, some of these reservoirs have recovery factors as low as 9% (Allan and Sun, 2003). This suggests the need for new methodologies to improve productivity.
Natural fractures can act as flow corridors, improving permeability and causing early water breakthrough, or as barriers, impeding the flow. Reservoirs with intense fracturing can deplete rapidly, falling quickly from high initial production rates to low ones. Thus, fracture mischaracterization can lead to premature well depletion and low recovery factors, while an adequate characterization results in well-informed management decisions and economically efficient field production.
Depending on the fracturing intensity, fractures play different roles in the reservoir. To better represent these reservoirs, some authors proposed classifying fractured reservoirs by the effect of fracture systems on the porosity and permeability of the reservoir (Nelson, 2001; Allan and Sun, 2003; Bourbiaux, 2010). In particular, Nelson (2001) classified fractured reservoirs into four types:
- Type I: both porosity and permeability result from fractures.
- Type II: the essential reservoir permeability comes mainly from fractures.
- Type III: fractures increase the permeability of an already producible reservoir.
- Type IV: fractures do not affect the porosity or the permeability of the reservoir, but create anisotropy (acting as barriers).
As these types of fractured reservoirs behave differently, the production strategy should reflect the reservoir behavior particular to each type (I, II, III or IV).
In this work, we focus on Type II, where fractures significantly affect the fluid flow. Such reservoirs are characterized by two distinct types of porous media, matrix and fractures, and we use the dual-porosity approach for simulation.
Hydrocarbons usually accumulate in the matrix porosity and flow to fractures, which work as permeable pathways to producer wells. Most of these reservoirs have very low matrix permeability (up to 10 mD), and fractures are the main fluid-conducting medium (Tran, 2004, p. 3). Thus, accurately modeling fracture networks improves the reliability of models.
According to Tran (2004), characterizing fractures is not straightforward as they form a complex system. There are uncertainties in the attributes that characterize the fracture network, such as distribution, location, size, and orientation. Different data sources are used to characterize these attributes, as they are often present at different scales, from millimeters (cell scale) to kilometers (reservoir scale); no single tool can provide all the information needed (Tran, 2004, p. 8). For instance, seismic data and outcrops provide information on a regional scale, while well logs, cores, and formation micro scanner images provide information on a local scale.
Accurately representing properties across different scales is difficult. Small-scale fractures, smaller than a cell, can be represented as an equivalent cell property within a Representative Elementary Volume (REV). On the other hand, large fractures, which can reach kilometers, have to be discretized individually. Fractures of intermediate scales (fracture swarms) are difficult to model, as they are too numerous to model explicitly and too large to form an equivalent medium at simulator scale (Bourbiaux, 2010). The definition of the REV for fractured reservoirs is also difficult, as highlighted and discussed by several authors (Warren and Root, 1963; Long et al., 1982; Gilman, 2003; Müller et al., 2010; Kuchuk et al., 2015). The size of the REV must be larger than the heterogeneity size and smaller than the macroscopic length scale, for instance, the well spacing (Royer et al., 2002; Gilman, 2003).
Integrating the static modeling into the history-matching process allows us to directly modify a geostatistical model, controlling attributes of individual fractures and getting geologically consistent models, i.e., honoring both static and dynamic information. Some studies proposed history matching the fracture attributes (Hu and Jenni, 2005; Gang and Kelkar, 2006), but most only match a few selected attributes, such as permeability, length, location, or size, or use a simplified reservoir. There is a lack of methodologies for complex reservoirs that include several fracture attributes.
To deal with attributes of different types, we propose a methodology to reduce uncertainty in naturally fractured reservoirs based on the big loop approach, which updates the static model during the history-matching process. The proposed methodology has two significant advantages:
- The methodology is independent of scale and can therefore be used for any uncertain reservoir attribute. It considers the whole process from the static modeling, allowing the combination of attributes of the geostatistical and simulation models in a big loop approach. Modifying attributes of the Discrete Fracture Network (aperture, length, orientation, etc.) leads to geologically consistent models and allows reproducing the network at the end of the process. This is impossible when modifying the fracture attributes of the simulation model, as those attributes are the combined result of geostatistical and simulation attributes.
- Iterative sensitivity analysis. We focus on the most influential attributes at each uncertainty reduction step, keeping the "non-influential" attributes fixed (explained in the methodology below), to reduce the number of attributes and improve the likelihood of obtaining good combinations (models close to the history). The novelty of our methodology is that after each uncertainty reduction step, a new sensitivity analysis re-evaluates all attributes. Thus, an attribute modified in a prior step can be changed in a following step if considered influential.
2 Literature review
The challenge of history matching is longstanding, with many works focusing on the subject, as shown by Oliver and Chen (2011) and Rwechungura et al. (2011). However, some methods do not suit features specific to naturally fractured reservoirs, such as fractures modeled as discrete objects. This section reviews history-matching methods for naturally fractured reservoirs.
Hu and Blanc (1998) first proposed the gradual deformation method for facies modeling; it combines two geostatistical realizations to create a new one, constrained by the geological information. Hu (2003), for example, applied the Boolean model to represent geologic objects, such as faults and fractures, and the Poisson point pattern to create locations of objects in the reservoir; he then modified object parameters using the gradual deformation method. He applied the method to a simple synthetic case to model flow barriers, modifying object locations. In contrast, we propose modifying several geostatistical attributes of fractures and applying the method to a practical case.
Hu and Jenni (2005) extended the work of Hu (2003) to modify characteristics of objects, such as size, quantity, and shape, in the history matching through gradual deformation. Subsequently, Jenni et al. (2007) applied the gradual deformation method to large-scale fractures below seismic resolution (sub-seismic faults and fracture swarms). In our methodology, we also change the characteristics of each fracture, but by modifying the probability distribution of the mean of each attribute.
After Jenni et al. (2007), Verscheure et al. (2012) applied the gradual deformation method to change sub-seismic fault locations in a synthetic case with 121 × 150 × 1 blocks and an average thickness of 10 m. They divided the reservoir into four regions and applied the deformation algorithm per region to match the water cut data. In this work, we propose another method to modify fracture properties and apply it to a 3D case.
Caers (2002) applied a training image to measure the correlation between multiple points, allowing better characterization of complex structures; this replaces the variogram, which commonly measures geological continuity by computing the correlation between two locations in space (two-point statistics). Afterwards, Caers (2003) combined the gradual deformation method and the training image approach with streamline-based proxy models, applied to a 2D case of a fractured reservoir. He used the training image to constrain new realizations, and the proxy to model the modified permeability. In another application of the training image approach, Jung et al. (2013) generated a set of training images from DFN models and iteratively updated the occurrence probability of each image, discarding those with low probabilities. Despite working with DFN models, they did not change the attributes of the fractures, as we do in this work.
Cui and Kelkar (2005) based their method on the gradient, proposing the chain rule to compute the sensitivity of the fracture intensity to dynamic data, instead of changing properties directly in the simulation model, such as permeability and the sigma factor. Similarly, Gang and Kelkar (2006) proposed modifying the permeability of each fracture instead of the grid block effective permeability of the simulation model. Gang and Kelkar (2008) later matched fracture permeability and water-oil capillary pressure simultaneously. Fracture permeability is the result of a combination of different attributes after the geostatistical simulation; we propose changing the attributes that are input to the geostatistical simulation.
The work of Basbug and Karpyn (2008) also addresses capillary pressure. They proposed a methodology using B-Spline functions to model relative permeability and the Brooks-Corey formula to model capillary pressure. They used a fractured core sample model for the application. We also consider relative permeability and capillary pressure, but in a practical case and mixing attributes from different scales.
To reduce the high computational effort required by naturally fractured reservoirs, some works proposed the streamline-based approach (Vasco et al., 1997; Bahar et al., 2003) while Al-Harbi et al. (2004) applied a dual-porosity streamline model to simulate fracture flow, treating fractures and matrix as separate continua, connected through a transfer function. They applied the concept of generalized travel time to match the water-cut data by modifying fracture permeability. While Al-Harbi et al. (2004) worked on the simulation model (coarse scale), we work with attributes from the geological model (fine scale). Moreover, we consider several Objective Functions, such as oil and water production rates, bottom-hole pressure and water injection rate.
Similarly, De Lima et al. (2009) proposed the application of connectivity analysis to characterize well-reservoir and well-to-well connectivity in fractured reservoirs. Again, the authors worked only on the simulation model while we use the geological and simulation models.
With technological advancement, computationally intensive approaches have emerged in recent years (such as integrating static modeling into the history-matching process). In this context, including 4D seismic data in the history matching, called Seismic History Matching (SHM), has the potential to improve matching quality, as it allows saturation maps to be matched in addition to production data. For instance, Tolstukhin et al. (2012) applied SHM to a portion of the Ekofisk field. They identified the eight most important attributes, including fracture attributes, through traditional one-at-a-time sensitivity analyses. We propose an alternative: application to a whole-reservoir case and a modified sensitivity analysis performed iteratively.
Other approaches are: artificial neural networks (Al-Anazi and Babadagli, 2009), Markov Chain Monte Carlo (Ginting et al., 2011), probability perturbation method (Suzuki et al., 2007), recovery curve method (Ghaedi et al., 2015), Discrete Fracture Network flow simulator (Lange, 2009), Ensemble Kalman Filter (Lu and Zhang, 2015; Nejadi et al., 2015), and Kernel principal component analysis (Paico, 2008).
Most works use simplified reservoirs for their applications, such as 2D models or large-scale vertical fractures crossing the reservoir. Moreover, many of these works modified only a few fracture attributes, such as quantity and location, or permeability and capillary pressure. In contrast, we apply our methodology to a complex reservoir with several fracture attributes.
The quantity of uncertain attributes is a concern in any history-matching problem. When the range of each attribute is divided into intervals, the number of possible combinations among attributes increases significantly. One solution to this problem is to use sensitivity analyses to select the most influential attributes and reduce the search space (Almeida Netto et al., 2003; Rotondi et al., 2006; Kassenov et al., 2014).
To identify the most important input parameters, McKay et al. (1999) applied rLHS (replicated Latin Hypercube Sampling) and the correlation ratio. In each iteration, they applied rLHS sampling, simulated the system under study, and selected the input parameter with the highest correlation ratio. They then kept this parameter fixed and repeated the iteration, determining the parameter with the highest correlation ratio among those remaining, and so on. This way, they sequentially evaluated the importance of each input and obtained the set of most important input parameters. Their example involved military aircraft, although LHS is also suitable for sensitivity analysis in fractured reservoirs. We likewise identify the most influential attributes, though using the correlation matrix.
Ford and Flynn (2005) presented a statistical screening procedure applied to dynamic system models. In their study, they calculated the correlation coefficient of each uncertain input parameter for each period of the output. Then, they defined the desired period of analysis and evaluated the behavior of the correlation coefficient over that period, determining the most important inputs. Afterwards, Taylor et al. (2010) proposed the six-step statistical screening approach. Similarly, we also evaluate the correlation coefficient, although using the Discrete Latin Hypercube for discrete attributes to generate the combinations.
Feraille and Marrel (2012) applied the Morris method for screening and sensitivity analysis to identify the most influential attributes of the PUNQ field. They used the most influential attributes to build a proxy model and perform probabilistic history matching. In this work, we propose a different approach for screening influential attributes and show an application to a fractured reservoir case.
Maschio and Schiozer (2016) applied the correlation coefficient to a petroleum reservoir. They used the correlation matrix to identify and select the Objective Functions affected by each uncertain attribute. Then, they computed local Objective Functions (using only the influenced ones) for each attribute and generated new histograms for each. We base our method for updating the probability distributions of the most influential attributes on this work.
As stated in the introduction, naturally fractured reservoirs contain more uncertain attributes when compared to non-fractured reservoirs. To address the gap in methodologies that model several fracture attributes in complex cases, we propose an iterative sensitivity analysis to assess many uncertain attributes and apply it to a complex case. We use Discrete Latin Hypercube (DLHC) in a history-matching process integrated with static modeling (as opposed to traditional history-matching methods, which do not include this integration), allowing direct contact with geostatistical attributes and consistent modifications of the reservoir model.
3 Methodology
Figure 1 shows the general history-matching workflow introduced by Avansi et al. (2016). Our methodology contributes to Step 4, sampling and generating models, and Step 11, reducing global uncertainty. For sampling (Sect. 3.1), we propose integrating the attributes from the static modeling that generate images, such as fracture aperture, length, and orientation, with other attributes, such as coefficients of the relative permeability curve or capillary pressure. This differs from previous works: for instance, Maschio and Schiozer (2016) applied regional multipliers, and Avansi et al. (2016) applied a virtual well method (similar to pilot points) to perturb images. For the uncertainty reduction step (Sect. 3.2), we propose applying sensitivity analysis iteratively to reduce uncertainty by groups of influential attributes. We also consider a different procedure to select the models used to update histograms, and include a rescaling step not considered by the cited authors.
Fig. 1. General history-matching workflow introduced by Avansi et al. (2016). The contributions of this work are on Steps 4 and 11, highlighted in red.
Below we summarize the 15 steps of the general history-matching workflow:
1. Defining uncertain attributes: define uncertain attributes, discretize their range of variation into levels, and use a discrete probability distribution to represent each attribute. By discretizing intervals, we limit the possible number of combinations and avoid redundant combinations (values too close together), which can occur when sampling continuous intervals.
2. Defining acceptable tolerance: define the tolerance for observed values. Determine the Acceptable Quadratic Distance (Eq. (5)) and normalize the Objective Functions (Eq. (1)). Models with NQDS between −1 and 1 are considered matched (see Tab. 1). In some specific situations, however, models outside this range can be accepted. This tolerance remains fixed during the whole uncertainty reduction process.
Table 1. Quality of models according to NQDS intervals.
3. Defining the process: choose history matching or uncertainty reduction. Although similar, they have different objectives. Uncertainty reduction characterizes attributes to fit observed data by modifying characteristics of their probability distributions, while history matching finds the best history-matched model(s), normally using an optimization method.
4. Sampling and generation of models: generate n combinations of attributes (n models). More details in Section 3.1.
5. Numerical simulator: numerically simulate the n models.
6. Diagnosis: analyze the quality of the n models through the NQD (Normalized Quadratic Distance), shown in equation (1):

$$\mathrm{NQD} = \frac{\sum_{i=1}^{N} \mathrm{SD}_i^2}{\mathrm{AQD}} \qquad (1)$$

To facilitate the visualization and analysis of the results, we use the NQDS, the normalized quadratic distance with sign, defined in equation (2):

$$\mathrm{NQDS} = \frac{\mathrm{QDS}}{\mathrm{AQD}} \qquad (2)$$

where NQDS is the Normalized Quadratic Distance with Sign, QDS is the Quadratic Distance with Sign, and AQD is the Acceptable Quadratic Distance. The expression for QDS is

$$\mathrm{QDS} = \frac{\sum_{i=1}^{N} \mathrm{SD}_i}{\left| \sum_{i=1}^{N} \mathrm{SD}_i \right|} \sum_{i=1}^{N} \mathrm{SD}_i^2 \qquad (3)$$

where N is the number of observed data points, Sim is the simulated data, Hist is the observed data, and SD is the simple distance, computed by equation (4):

$$\mathrm{SD}_i = \mathrm{Sim}_i - \mathrm{Hist}_i \qquad (4)$$

The expression for AQD is

$$\mathrm{AQD} = \sum_{i=1}^{N} \left( \mathrm{Tol} \times \mathrm{Hist}_i + C_p \right)^2 \qquad (5)$$

where Tol is the tolerance, given as a percentage of the observed data, and C_p is a constant added to avoid division by zero when the data values are close to zero, which can occur with water production rate. As highlighted by Maschio and Schiozer (2016), there is no standard rule for choosing Tol and C_p, which might depend on engineering judgement. The choice of either depends on the reliability of the observed data and usually on the type of data. For instance, Maschio and Schiozer (2016) give an example for gas rate: it is usually more difficult to measure and its measurement error tends to be high, so its tolerance should be higher than for other variables, such as oil rate. (A minimal computational sketch of equations (1)-(5) appears after this list.)
7. Acceptable results: through the diagnosis, assess whether models are within the acceptance range or whether the uncertainty must be reduced. The graphical plot of the NQDS helps with this assessment. Figure 2 gives an example of NQDS plots for seven wells; the acceptance range lies between the lines at −1 and 1.
Fig. 2. Example of NQDS plot.
In Figure 2, well 6 has the best match: all models have NQDS within the acceptance range. For wells 4 and 5, an uncertainty reduction suffices. For wells 1, 2, 3, and 7, a review of the static model is necessary to obtain models with NQDS both above and below zero.
The interval [−1, 1] for NQDS is the final objective. However, in some cases, values outside this range might be accepted when it is not possible to obtain values within the range (when there are high uncertainties or lack of data). Table 1 shows the relationship between the quality of models and NQDS intervals.
Models with |NQDS| of 20 or less are still acceptable, depending on the case and on the objective (or phase) of the study. These intervals reflect the quality of the matching and were defined based on the authors' practical experience with history-matching problems.
8. Numerical model: search for inconsistencies in the numerical model. A problem in well productivity is one reason to review the numerical model.
9. Analysis and changes: correct the productivity index or well completion intervals.
10. Regional/local reduction: evaluate whether a regional/local matching step is necessary. After a global uncertainty reduction, for instance, this step can further improve the match.
11. Global uncertainty reduction: reduce the global uncertainty of attributes. Further details of this step are in Section 3.2.
12. Regional/local uncertainty reduction: reduce regional/local uncertainty.
13. Increase the number of models: at the end of the process, when the diagnosis indicates that the results are acceptable, we increase the number of models to increase the chance of having more filtered models.
14. Filter: after the uncertainty reduction, some models might still be outside the acceptance range. This step filters them out, to select only those within the range.
15. Application: forecast production under uncertainty using the models selected in the filtering step (Step 14).
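To make the diagnosis step concrete, the following minimal sketch computes the NQDS of equations (1)-(5) for a single Objective Function time series. It is an illustration under our own naming (the function `nqds`, the example rates), not the authors' implementation.

```python
import numpy as np

def nqds(sim, hist, tol=0.10, cp=0.0):
    """Normalized Quadratic Distance with Sign, Eqs. (2)-(5).

    sim, hist : simulated and observed series of one Objective Function
    tol       : tolerance, as a fraction of the observed data
    cp        : constant avoiding division by zero for near-zero data
    """
    sim, hist = np.asarray(sim, float), np.asarray(hist, float)
    sd = sim - hist                          # simple distance, Eq. (4)
    qds = np.sign(sd.sum()) * np.sum(sd**2)  # quadratic distance with sign, Eq. (3)
    aqd = np.sum((tol * hist + cp)**2)       # acceptable quadratic distance, Eq. (5)
    return qds / aqd                         # Eq. (2); its absolute value is the NQD of Eq. (1)

# A model slightly overestimating a rate gives a small positive NQDS:
hist = np.array([100.0, 110.0, 120.0, 130.0])  # observed data (illustrative)
sim = np.array([104.0, 116.0, 128.0, 141.0])   # simulated data (illustrative)
print(nqds(sim, hist, tol=0.10, cp=10.0))      # ~0.13, within [-1, 1]: matched
```

A model is then classified according to the interval of Table 1 in which its NQDS falls.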
3.1 Sampling and generation of models
In this section, we detail our method to generate the simulation models, combining the uncertain attributes regardless of scale. Figure 3 shows the three steps.
Fig. 3. Steps for sampling and model generation.
Figure 3 shows how we generate a single simulation model from a combination of k attributes (q geostatistical model attributes plus p simulation model attributes, k = p + q). The process of sampling and generating models can be summarized in three steps:
1. Combine levels of attributes. Using the DLHC, we combine attributes from both the simulation model (e.g., fault transmissibility, relative permeability, capillary pressure) and the geostatistical model (fracture aperture, length, and orientation). We use DLHC because it ensures that values over the whole range of an attribute are selected (a sampling sketch follows this list). Schiozer et al. (2015) used the DLHC to combine simulation model attributes and geostatistical images (which they called Discretized Latin Hypercube combined with Geostatistical realizations, DLHG): they first generated n images and combined these with the attributes of the simulation model, so their combination occurred at a coarse scale. Unlike this method, we combine the attributes used to build the images, i.e., fine-scale attributes with coarse-scale ones. Hence, we first generate the combination of attributes from different scales, and then use the generated values in the workflow (Fig. 3) (e.g., the aperture in the geostatistical simulation, and the exponent of the relative permeability curve to generate the relative permeability curve).
2. Generate geological images using the set of attributes from the geostatistical model, and upscale. For n combinations of attributes, we obtain n images (e.g., five images for five sets of attributes).
Figure 4 illustrates how a sampled fracture aperture is input into the geostatistical simulation, and the workflow to generate the fracture spacing and fracture permeability in the coarse grid.
Fig. 4. Process of generating simulated scenarios.
In Figure 4, DLHC sampled level 1 (0.11 mm) of a fracture aperture. This value corresponds to the expected value of the probability distribution (for instance, lognormal), which is input into the geostatistical simulation. The geostatistical simulation then generates one realization of the discrete fracture network, which is upscaled to generate the fracture spacing and permeability. For more details on DLHC, see Maschio and Schiozer (2016), and Schiozer et al. (2015).
Fracture permeability and spacing in the I, J, and K directions are inputs to the flow simulator. For instance, IMEX from CMG® has an option to apply the Gilman and Kazemi (1983) matrix-fracture transfer coefficient, σ, as follows:

$$\sigma = 4\left(\frac{k_x}{L_x^2} + \frac{k_y}{L_y^2} + \frac{k_z}{L_z^2}\right)\Delta x\,\Delta y\,\Delta z \qquad (6)$$

In equation (6), k_x, k_y, and k_z are the permeabilities in the x, y, and z directions; L_x, L_y, and L_z are the fracture spacings in the x, y, and z directions; and Δx·Δy·Δz is the total matrix volume.

Thus, the DFN must be upscaled to provide these inputs. Commercial software such as Petrel (Schlumberger) has an internal algorithm that calculates these attributes for each grid cell. Basically, treating a cell as a cube, the algorithm traces several lines perpendicular to each face, calculates the distances between consecutive intersections with fractures, and uses them to compute the fracture spacing in each direction (a small sketch of this computation follows the list below).
3. Combine the image with the corresponding set of simulation model attributes.
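As a complement to step 1, the sketch below shows one way the DLHC sampling could work: every attribute's levels are spread with near-equal frequency across the n models and then shuffled independently. Attribute names and level values are illustrative, and uniform level probabilities are assumed (later iterations would weight levels by their updated probabilities).

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative attributes discretized into five levels; geostatistical and
# simulation model attributes enter the same hypercube.
attributes = {
    "fracture_aperture_mm": [0.11, 0.18, 0.25, 0.32, 0.40],       # geostatistical
    "fracture_length_m":    [200.0, 250.0, 300.0, 350.0, 400.0],  # geostatistical
    "krw_exponent":         [1.5, 2.0, 2.5, 3.0, 3.5],            # simulation model
}

def dlhc_sample(attributes, n_models):
    """Discrete Latin Hypercube: cover the whole range of every attribute."""
    columns = {}
    for name, levels in attributes.items():
        idx = np.resize(np.arange(len(levels)), n_models)  # near-equal frequency
        rng.shuffle(idx)                                   # independent permutation
        columns[name] = [levels[i] for i in idx]
    return [dict(zip(columns, row)) for row in zip(*columns.values())]

for model in dlhc_sample(attributes, n_models=5):
    print(model)  # one combination -> one geostatistical image + simulation deck
```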
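The next sketch illustrates the upscaling outputs that feed equation (6): a fracture spacing estimated from line-fracture intersections (the Petrel-like idea described in step 2) and the resulting Gilman-Kazemi transfer coefficient. All numbers are illustrative, and the helper names are ours.

```python
import numpy as np

def spacing_from_intersections(positions, cell_size):
    """Average distance between consecutive fracture intersections along a
    sampling line traced across a cell; falls back to the cell size when a
    direction has fewer than two intersections."""
    gaps = np.diff(np.sort(np.asarray(positions, float)))
    return gaps.mean() if gaps.size else cell_size

def gilman_kazemi_sigma(kx, ky, kz, lx, ly, lz, dx, dy, dz):
    """Matrix-fracture transfer coefficient of Eq. (6):
    sigma = 4*(kx/Lx^2 + ky/Ly^2 + kz/Lz^2) * (dx*dy*dz)."""
    return 4.0 * (kx / lx**2 + ky / ly**2 + kz / lz**2) * dx * dy * dz

lx = spacing_from_intersections([8.0, 19.0, 31.0, 39.0], cell_size=100.0)  # ~10.3 m
ly = spacing_from_intersections([12.0, 22.0, 30.0], cell_size=100.0)      # ~9.0 m
lz = spacing_from_intersections([], cell_size=20.0)       # no vertical intersections
print(gilman_kazemi_sigma(50.0, 50.0, 5.0, lx, ly, lz,    # permeabilities (mD)
                          dx=100.0, dy=100.0, dz=20.0))   # block dimensions (m)
```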
3.2 Global uncertainty reduction of attributes
Figure 5 shows the workflow to reduce global uncertainty. One contribution of this work is the application of sensitivity analysis after each uncertainty reduction step (Step 6), making it possible to identify and focus on groups of influential attributes.
Fig. 5. Workflow to reduce global uncertainty.
The 7 steps in Figure 5 are explained below:
1. Initial sensitivity analysis: identify the attributes that most affect the Objective Functions. To measure influence, we use the correlation matrix (m rows of uncertain attributes by n columns of Objective Functions, or NQDS). Each cell of the matrix stores the correlation coefficient (R) between an attribute and an Objective Function. An attribute is influential if |R| > R_c (threshold). For instance, if there are five Objective Functions, there are five R values for each attribute; if at least one exceeds R_c, the attribute is influential, and the influential attributes are grouped together for this iteration. Attributes are ranked according to their R values above R_c, giving the order of influence (see the screening sketch after this list).
2. Diagnosis: simulate models to assess the NQDS. Holding non-influential attributes at their mean values, we run the DLHC sampling again. We hold these values to reduce the number of possible DLHC combinations and to validate the analysis: if the variation of the influential attributes causes the major output variability, then the NQDS variability when varying only the influential attributes must be similar to that when varying all attributes. In other words, we compare the NQDS plots before and after freezing the less influential attributes to verify that most of the variability is preserved.
3. NQDS symmetry OK? Ideally, NQDS should be symmetric; as some asymmetry is likely, it should remain within an acceptance range. The presence of models with both positive and negative NQDS is sufficient to reduce uncertainty, as the asymmetry can be further treated by a regional/local approach. Symmetry guarantees that the models are biased neither positively nor negatively, meaning that the combination of the uncertain attributes properly covers the observed data. If the models are biased, reducing uncertainty will not lead to matched models (see Fig. 2, wells 1 and 2).
4. Rescaling: this step rescales attributes sequentially, according to how influential they are. The most influential attribute in a group is rescaled first; if necessary (if the asymmetry remains unimproved), the second most influential attribute is rescaled, and so on.
5. Uncertainty reduction: identify the probability distribution of the influential attributes, i.e., update the probability of each level of each influential attribute. The method is based on the work by Maschio and Schiozer (2016) and is explained in Section 3.2.1. After reducing the uncertainty, we keep these distributions fixed for the next iterations.
6. Sensitivity analysis: we allow the non-influential attributes to vary again, with their original probability distributions (for instance, uniform), while maintaining the updated distributions of the influential attributes. We then sample again, varying all attributes with their respective probabilities. This is done because, after reducing the uncertainty of the attributes that strongly affect the Objective Functions, the influence of other attributes may become significant (the strong influence of one attribute may hide the influence of another).
7. New group? If Step 6 produces a new group of influential attributes, Step 5 is repeated; otherwise, the global uncertainty reduction is complete.
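A compact sketch of the screening used in Steps 1 and 6, assuming the sampled levels and the NQDS values are available as arrays. We compare the absolute value of R against R_c, consistent with the negative correlations treated as influential later (Tab. 3); the function name and the synthetic demo data are ours.

```python
import numpy as np

def influential_attributes(levels, nqds, names, rc=0.3):
    """Return attributes with at least one |R| > rc, most influential first.

    levels : (n_models, n_attributes) sampled attribute levels
    nqds   : (n_models, n_of) NQDS values, one column per Objective Function
    """
    found = []
    for j, name in enumerate(names):
        r = [abs(np.corrcoef(levels[:, j], nqds[:, k])[0, 1])
             for k in range(nqds.shape[1])]
        if max(r) > rc:
            found.append((name, max(r), sum(v > rc for v in r)))  # name, max |R|, NIF
    return sorted(found, key=lambda t: t[1], reverse=True)

# Demo: attribute 0 drives the first Objective Function, the others are noise.
rng = np.random.default_rng(0)
levels = rng.integers(0, 5, size=(500, 3)).astype(float)
nqds = np.column_stack([2.0 * levels[:, 0] + rng.normal(size=500),
                        rng.normal(size=500)])
print(influential_attributes(levels, nqds, ["aperture", "length", "krw_exp"]))
```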
3.2.1 Updating the probability of levels of attributes
Here, we detail Step 5 above (Fig. 5), specifically how to update the probability of each level of an influential attribute. To update the probabilities, we build on Maschio and Schiozer (2016), who used the correlation matrix and non-parametric density estimation to modify the probability distributions of attributes. The procedure follows Steps 5.1-5.5 as presented in Figure 5. The input for Step 5 is a group of k influential attributes, with k < n (n is the number of uncertain attributes).
First, in Step 5.1, the method combines only the influential attributes and identifies, through the correlation matrix, which one(s) influence which Objective Functions (OF), i.e., correlation coefficient R greater than R_c. The correlation matrix is also applied in Steps 1 and 6 of the method (Fig. 5); the difference is that in Steps 1 and 6 it is used for all uncertain attributes, whereas in Step 5.1 it is applied only to the group of influential attributes. Figure 6 shows the correlation matrix for one iteration, identifying which attributes affect which Objective Functions, if any.
Fig. 6. Example of correlation matrix identifying influential attributes (Maschio and Schiozer, 2016).
In Figure 6, the "x" marks highlight values of |R| > R_c. For instance, Attribute 1 influences Objective Functions OF1, OF3, and OF4, while Attribute 4 does not significantly affect any Objective Function.

Next, we compute the Local Objective Function (LOF) of each influential attribute using:

$$\mathrm{LOF} = \frac{1}{\mathrm{NIF}} \sum_{i=1}^{\mathrm{NIF}} \left| \mathrm{NQDS}_i \right| \qquad (7)$$

where NIF is the Number of Influenced Functions (for instance, for Attribute 1, NIF is 3: OF1, OF3, and OF4). Thus, for Attribute 1, LOF is computed taking into account OF1, OF3, and OF4; for Attribute 2, OF1 and OF2; and so on. LOF is the arithmetic average of the influenced Objective Functions (those with |R| > R_c). Note that each influential attribute has N values of LOF, where N is the number of models.
We then generate the new probability distribution of each influential attribute using the non-parametric Kernel density estimation method.
To select the models used to build the histogram, we rank models in increasing order of LOF and select those below a defined threshold. If the number of selected models is less than 100, we select 60% of the models (the first 0.6N ranked models). This differs slightly from the method of Maschio and Schiozer (2016), who proposed two ways to select models: (1) selecting a fixed percentage of the models ranked by LOF, and (2) using a gradually increasing cut-off value of NQD to ensure a minimum number of models is selected.
We use these models to generate a histogram. Figure 7 shows an example for Attribute 1.
Fig. 7. Histogram example generated from models with LOF lower than an established threshold (Maschio and Schiozer, 2016).
The histogram might contain discontinuities, which is undesirable in the initial stages of uncertainty reduction when the search space is still large and the number of combinations may be too small for proper coverage. To avoid discontinuities, the non-parametric Kernel density estimation generates the new probability distribution by smoothing the histogram. Figure 8 shows the resulting probability distribution.
Fig. 8. Smoothed probability distribution example generated by the non-parametric Kernel density estimation method (Maschio and Schiozer, 2016).
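The sketch below chains the sub-steps after 5.1 for one attribute: LOF by equation (7), model selection under the LOF threshold (with the 60% fallback), and Kernel smoothing of the resulting histogram. SciPy's Gaussian kernel stands in for the non-parametric estimator of Maschio and Schiozer (2016), whose exact bandwidth choice we do not reproduce; all names and data are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def updated_level_probabilities(nqds_influenced, levels, n_levels=5,
                                lof_threshold=5.0, min_frac=0.6):
    """New discrete probabilities for one influential attribute.

    nqds_influenced : (n_models, NIF) NQDS of the influenced OFs only
    levels          : (n_models,) sampled level index of the attribute
    """
    levels = np.asarray(levels, float)
    lof = np.abs(nqds_influenced).mean(axis=1)   # Eq. (7)
    selected = np.where(lof < lof_threshold)[0]  # models under the threshold
    if selected.size < 100:                      # fallback: best 60% by LOF
        selected = np.argsort(lof)[: int(min_frac * len(lof))]
    kde = gaussian_kde(levels[selected])         # smooth the selected-model histogram
    probs = kde(np.arange(n_levels))             # density evaluated at each level
    return probs / probs.sum()                   # normalized discrete probabilities

# Demo: models whose level is near 3 match better (lower LOF).
rng = np.random.default_rng(1)
levels = rng.integers(0, 5, size=400)
nqds = rng.normal(loc=3.0 * (levels - 3.0)[:, None], scale=2.0, size=(400, 3))
print(updated_level_probabilities(nqds, levels))  # peaks around level 3
```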
This method eliminates the discontinuities without prematurely excluding levels, as shown in Figure 8. For more detail on the theoretical background of this method, see Maschio and Schiozer (2016). One step not considered by those authors is rescaling; as the need for it might occur, we included this step, described in the following.
Despite smoothing the histogram, in some situations the Kernel method is unable to restore eliminated levels: for example, when there are no discontinuities but a steeply increasing/decreasing probability across levels. To illustrate, consider an attribute with five levels for which the uncertainty reduction eliminated the first three levels, assigning high probabilities to levels 3 and 4 (as shown in Fig. 9, in red). In this case, the Kernel method may restore only levels 1 and 2: with more than two consecutive eliminated levels, or a highly skewed distribution (as in Fig. 9), the Kernel method cannot restore all eliminated levels. This indicates that the attribute should have levels with higher values, so we rescale the attribute, keeping the number of levels and moving them in the direction indicated by the method; in this case, to the right.
Fig. 9. Attribute rescaling example.
When generating the new distribution, we can define the number and width of the levels, and the shape of the distribution. In this work, we maintain the number and width of the levels. We also use the form of the histogram (in red) to define the shape. Figure 9 shows the triangular distribution we used, indicated by the dashed line, resulting in the five levels indicated by solid line bars.
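A small sketch of the rescaling just described, under our own simplifications: the levels keep their count and spacing, are shifted in the indicated direction by whole level-widths, and receive triangular weights peaking where the skewed histogram concentrated. The numbers are illustrative, not the case-study values.

```python
import numpy as np

def rescale_levels(levels, peak_index, shift_levels):
    """Shift equally spaced levels by a number of level-widths and assign
    triangular probabilities centered on the peak of the old histogram."""
    levels = np.asarray(levels, float)
    width = levels[1] - levels[0]               # assumes (near-)equal spacing
    new_levels = levels + shift_levels * width  # move right (+) or left (-)
    weights = 1.0 - np.abs(np.arange(len(levels)) - peak_index) / len(levels)
    return new_levels, weights / weights.sum()

# Distribution skewed to the upper levels -> shift two level-widths to the right.
new_levels, probs = rescale_levels([0.11, 0.18, 0.25, 0.32, 0.40],
                                   peak_index=3, shift_levels=2)
print(new_levels)  # roughly [0.25 .. 0.54]: plausible aperture values only
print(probs)       # triangular weights, peaking at the fourth level
```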
To illustrate the global uncertainty reduction methodology, Figure 10 shows a demonstration of the uncertainty reduction of two influential attribute groups.
Fig. 10. Uncertainty reduction for two attribute groups.
Figure 10 shows that:
i. Sensitivity analysis (considering all attributes with uniform discrete distributions) identified aperture and orientation as influential. These form Group 1; the other attributes are fixed at their mean values.
ii. Uncertainty reduction for Group 1 resulted in the final distributions for aperture and orientation. We freeze these distributions from this point on.
iii. Allowing the other attributes to vary again with uniform discrete distributions, we identified a second group of influential attributes: matrix permeability and porosity. Now, Group 1 has fixed distributions (from step ii), Group 2 has uniform discrete distributions, and the other attributes have fixed mean values.
iv. The uncertainty reduction for Group 2 resulted in the final distributions for matrix permeability and porosity. Thus, the distributions for Group 1 were found in step ii and those for Group 2 in this step; the remaining attributes keep their uniform discrete distributions.
4 Case study
We apply our methodology to a synthetic carbonate field based on a real case. We constructed this reservoir due to the lack of information for a full characterization of a naturally fractured reservoir. The reservoir structure and information on reservoir permeability and fracture size are based on a real case. Using a synthetic case allows us to validate our methodology. The synthetic reservoir, called reference model (FR-R), is the “real” reservoir, with unknown characteristics. We then constructed a refined model (FR-SF) using the information from FR-R. Finally, we upscaled FR-SF to generate the coarse model (FR-SC).
4.1 Reference model (FR-R)
We constructed the reference model, FR-R, from 25 synthetic well logs of matrix porosity and permeability, facies distribution, and fracture intensity. The model has a corner-point grid of 35 × 56 × 30 blocks (58 800) with an average block size of 100 × 100 × 10 m.
Fractures are four-sided, with average length, aperture, and orientation of 300 m, 0.4 mm, and 255°, respectively. We simulated fracture properties to create the Discrete Fracture Network (DFN), which was then converted to an equivalent grid property.
To generate the production history, we applied a production strategy composed of 17 vertical wells: 12 producers completed in the first four layers at the top, and 5 injectors completed in the last five layers at the bottom of the reservoir. Note that these wells are different from the 25 wells used to build FR-R. We created a viable strategy based on a five-spot pattern, with plausible flow rates and recovery factors for naturally fractured reservoirs, to demonstrate the method. Determining the best recovery strategy is beyond the scope of this work.
Figure 11 shows the production strategy and the matrix porosity map. Simulation of the FR-R model generated 8 years of production history. We no longer use this model after generating the history, as it represents the "real" reservoir whose characteristics are unknown.
Fig. 11. Matrix porosity and the production strategy. PROD represents producers and INJ represents injectors.
4.2 Refined simulation model (FR-SF)
To construct the refined model, we used the well logs of the 17 wells of the defined production strategy, sampled from FR-R, for matrix porosity and permeability, facies distribution, and fracture intensity. This model has the same resolution as FR-R, with a grid of 35 × 56 × 30 blocks (58 800) measuring 100 × 100 × 10 m.
The difference is that FR-SF was constructed using only the well log information sampled from the reference ("real") reservoir FR-R at the 17 wells. FR-SF is the base for generating multiple models using different combinations of the uncertain attributes (described in Sect. 4.4). These models are then upscaled for flow simulation.
4.3 Coarse simulation model (FR-SC)
For flow simulation purposes, we upscaled the refined model (FR-SF) to create the coarse simulation model (FR-SC). In this work, we applied the upscaling in the vertical direction, grouping every two blocks and halving the number of blocks in the vertical direction.
The FR-SC model has a corner-point grid of 35 × 56 × 15 blocks (29 400) measuring 100 × 100 × 20 m. As this is a typical Type II naturally fractured reservoir, we applied the dual-porosity model, with separate matrix and fracture grids. The main characteristics of this model are:
- Reference pressure of 250 kgf/cm², and bubble point pressure of 201.5 kgf/cm².
- Depth of 2350 m and water-oil contact at 2530 m.
- Oil of 38° API.
- Two rock types: rock type 1 with a water-wet trend and rock type 2 with intermediate wettability.
Fractures have a straight-line relative permeability, which is frequently used as a good approximation (Mazo and Schiozer, 2013; Correia et al., 2016), and zero capillary pressure, which is a common practice when simulating naturally fractured reservoirs (Mazo and Schiozer, 2013).
During the history period, the producer wells are controlled by liquid rate and the injector wells are controlled by water injection rate.
4.4 Uncertain attributes
Table 2 shows the uncertain attributes considered in this case; these attributes potentially impact reservoir performance.
Uncertain attributes, discretized into five levels.
Table 2 shows that, of the 18 uncertain attributes, 1–14 are from the geostatistical model and 15–18 are from the simulation model.
- Variograms in the x and y directions (attributes 1-2): these influence the porosity map of the static model.
- Facies proportions (attributes 3-7): the proportion of each facies present in the reservoir; the facies are further grouped into specific rock types in the simulation model.
- Correlation coefficient between porosity and permeability (attribute 8): varies the correlation between the permeability and porosity of the static model, and thus impacts the permeability map.
- Correlation coefficient between fracture intensity and distance from a fault (attribute 9): quantifies how fractured a region close to a fault is; it affects the fracture intensity map.
- Variograms in the x and y directions of fracture intensity (attributes 10-11): parameters of the variogram of the fracture intensity map, which they also influence.
- Fracture length, orientation, and aperture (attributes 12-14): fracture characteristics defining the size (length and aperture) and direction (orientation) of the fractures.
The remaining attributes (15-18) are the parameters of the capillary pressure and relative permeability expressions that generate the tables of these properties used in the simulation model. The five facies of the static model are grouped into two rock types in the simulation model:
- Coefficient of relative permeability of rock type 1 (attribute 15): changes the water relative permeability curve of rock type 1.
- Coefficient of capillary pressure of rock type 1 (attribute 16): changes the capillary pressure curve of rock type 1.
- Coefficient of relative permeability of rock type 2 (attribute 17): changes the water relative permeability curve of rock type 2.
- Coefficient of capillary pressure of rock type 2 (attribute 18): changes the capillary pressure curve of rock type 2.
For the range of variation of each attribute, we defined initial minimum/maximum values that are not pushed to the physically plausible limits, so that rescaling (changing the minimum/maximum values) does not extrapolate the physical limits.
4.5 Parameterization for the uncertainty reduction
We used liquid rate (producers) and water injection rate (injectors) as well constraints for the simulations. To match the observed reservoir behavior, we considered 56 Objective Functions (OF): oil production rate and bottom-hole pressure for the 12 producers, water injection rate and bottom-hole pressure for the 5 injectors, water production rate for all producers except PROD5 and PROD9, and water breakthrough time for 12 wells. Because the water rates of wells PROD5 and PROD9 are very low, we did not consider them as OFs and included their water breakthrough times instead. We measure the quality of the match using the Normalized Quadratic Distance with Sign (Eq. (2)).
In the calculation of the Acceptable Quadratic Distance given in equation (5), we used a tolerance of 0.05 (5%) for injected water rate and produced liquid rate (informed rates), a tolerance of 0.1 (10%) for the other OFs, and a C_p of 10 m3/day for the water production rate of well PROD8, which has a low production rate.
The choice of Tol and C_p is subjective and depends on the reliability of the observed data (Maschio and Schiozer, 2016). With Tol higher than 10%, acceptable solutions may be too far from the history data; our tests showed that, with a tolerance of 10%, the production curves of accepted solutions were still within an acceptable range. For the injected water rate and produced liquid rate, as they are informed rates, we expect smaller deviations from the history. The constant C_p is added to production rates that are zero or close to zero, to avoid division by zero when calculating the OF. This value has to be small enough not to impact the OF value; we compared values of 5, 10, and 15 m3/day and chose 10 m3/day.
A common concern in reservoir simulation is the number of simulations. Schiozer et al. (2015) suggested 100-300 to be a good number for risk quantification in their study. For this case, we ran tests and concluded that 500 simulations provide an acceptable range of NQDS variability.
After simulation, we set the correlation threshold R_c at 0.3 to consider an attribute influential, based on the work of Maschio and Schiozer (2016), who suggested a value between 0.1 and 0.4 after comparing results with different values of R_c. With values too high, we might not identify influential attributes; with values too low, we end up selecting all attributes (and cannot identify groups). In this work, we decided to use 0.3 as the threshold.
We consider an attribute influential if it has at least one value of |R| > R_c, i.e., at least one OF must be influenced. The R_c value can be changed depending on the quality and amount of data, and on the required accuracy. For each selected attribute, we rank models according to the LOF threshold value, for which we used five. There may be only a few models with LOF under five, which is insufficient to construct the new histogram of an attribute; in such cases, we use a minimum of 60% of the models to generate the new histogram.
5 Results and discussions
5.1 Diagnosis
Following the methodology, we simulated 500 models varying all attributes over their uncertain ranges. On a cluster with ten 64-bit AMD Opteron processors at 2.6 GHz, building and simulating these models took, on average, 24 h.
Figure 12 shows the NQDS of (a) oil rates and (b) water rates for the producers. The dashed lines mark the NQDS threshold (|NQDS| = 5).
Fig. 12. NQDS of (a) oil rate, and (b) water rate for each producer for the initial run (diagnosis).
Figure 12 shows significant variability, which is important in the diagnosis step; the same observation holds for the other OFs. Asymmetry is visible in the NQDS values: oil rates lean to the negative side and water rates to the positive side. However, the NQDS distribution encompasses the acceptance range for all wells, with both positive and negative NQDS values. In this case, we considered this asymmetry acceptable and continued to the next step of the methodology, choosing the most influential attributes.
5.2 Group 1 of influential attributes
Table 3 shows part of the correlation matrix for oil and water rates.
Correlation matrix between uncertain attributes and NQDS; showing the matrix for oil and water rates.
In Table 3, each cell stores the correlation coefficient between an attribute and an Objective Function; there is one column for each of the 56 Objective Functions. Bold cells highlight values with |R| > R_c (0.3). For instance, the correlation coefficient between fracture aperture (Abf) and the NQDS of the water production rate of well 11 is −0.88. As Table 3 shows, fracture aperture (Abf) has |R| greater than R_c for several wells, affecting almost all OFs with high R values. Fracture orientation (Orf) affected the oil and water rates of some wells. The coefficient of capillary pressure of rock type 1 (Pcdf1) influenced only one OF. These three attributes form Group 1 of influential attributes.
5.3 Uncertainty reduction of Group 1 of influential attributes
We kept all other attributes at their mean values and let Group 1 vary (uniform discrete distributions). We called this run G1It1 (Group 1, Iteration 1). Figure 13 shows the NQDS of the diagnosis and of G1It1.
Fig. 13. Comparison of NQDS of the diagnosis and after the first iteration (G1It1) for (a) oil rate, and (b) water rate for each producing well.
Figure 13 shows that the NQDS variability of oil and water rates was reduced for some wells, possibly because the strong influence of aperture hides the influence of other attributes. After reducing the uncertainty of Group 1, other attributes might emerge as influential.
Next, we used the LOF approach to select and filter the models for constructing the histograms of Group 1, and the Kernel method to smooth the histograms and create the new probability distributions. Figure 14 shows the smoothed histograms of Group 1 after a single iteration of the global uncertainty reduction methodology.
Fig. 14. Smoothed histograms after G1It1 for (a) fracture aperture, (b) fracture orientation, and (c) coefficient of capillary pressure of rock type 1.
As Figure 14 shows, some levels have low frequency, i.e., high absolute LOF values. For fracture aperture, for instance, few models with level 0 have acceptable LOF values, while models with level 4 resulted in better matches. This led to a probability of 1% for level 0 and 6.3% for level 1 in the next iteration (G1It2).
Figure 15 shows smoothed histograms after G1It2.
Fig. 15. Smoothed histograms after G1It2 for (a) fracture aperture, (b) fracture orientation, and (c) coefficient of capillary pressure of rock type 1.
In Figure 15, the histogram of fracture aperture shows the elimination of two levels even after applying the Kernel method, which restored only level 2. Hence, we rescaled the fracture aperture; the other attributes assumed the probability distributions of the presented histograms.
Following the criteria defined in Section 3.2.1, we used a triangular distribution, as Figure 16 shows.
Fig. 16. Rescaling of fracture aperture using a triangular distribution after two iterations.
In Figure 16, the triangular distribution is centered on level 4 to honor the shape of the distribution. Moreover, we kept five levels to avoid reducing the number of levels. Table 4 shows the new levels.
New levels of fracture aperture after rescaling. In parentheses are the probabilities of occurrence for each level.
In Table 4, the capital letter "A" distinguishes the new levels from the previous ones. The new distribution is centered on level 4 of the scale before rescaling, with 36% probability. It is important to note that the new levels must honor plausible values for each attribute; 0.791 mm is still a plausible fracture aperture.
After these modifications, we ran two more iterations, totaling four iterations for Group 1. Figure 17 shows the NQDS for the last two iterations, with the diagnosis for comparison.
Fig. 17. NQDS of the last two iterations for Group 1 (G1It3 and G1It4), and the diagnosis for comparison; (a) producers' oil rate, (b) producers' water rate, (c) producers' bottom-hole pressure, and (d) injectors' bottom-hole pressure.
Figure 17 shows significantly improved matching quality overall. The uncertainty reduction of Group 1 resulted in a good match of oil rates for PROD1, 2, 8, 9, 11, and 12 (Fig. 17a), and of water rates for PROD6, 11, and 12 (Fig. 17b). Furthermore, Figures 17c and 17d show good pressure matches: almost all models are within the acceptance range for several wells, such as PROD1, 2, 3, 6, 8, 9, 10, 11, and 12. The attributes of Group 1 are all fracture related; in Type II reservoirs (see Sect. 1), fractures control the flow capacity and hence have a greater impact on pressures, which explains the better pressure matches.
5.4 Group 2 of influential attributes
After reducing the uncertainty for Group 1, we fixed its distributions for the rest of the methodology. We then ran 500 simulations varying all other attributes with their initial (uniform discrete) distributions. The correlation matrix showed the fracture-fault correlation coefficient (Ckf) and fracture length (Cf) to be influential after the uncertainty reduction of Group 1. These attributes form Group 2 of influential attributes.
Hence, to reduce uncertainty for Group 2, we have:
- Group 1: attributes that vary according to a fixed probability distribution.
- Group 2: attributes that vary according to a probability distribution that will be modified during the uncertainty reduction to find the best shape.
- Other attributes: fixed at the mean value of the original uniform distribution.
Therefore, five attributes vary: three from Group 1 and two from Group 2. The difference is that Group 2 attributes will have their probability distributions changed, while Group 1 attributes keep fixed distributions.
5.5 Uncertainty reduction of Group 2 of influential attributes
We ran three iterations to reduce uncertainty for Group 2 (up to G2It3); Figure 18 shows the boxplot of NQDS of oil and water rates for some producer wells. After some iterations, plots such as those in Figure 17 become difficult to analyze, as they look similar from one iteration to the next; in these cases, the boxplot of NQDS enables better comparison between two iterations.
Fig. 18. Boxplot of NQDS of production rates after three iterations. Graphics show the last two iterations for each well in pairs: G2It2 to the left and G2It3 to the right. Panel (a) shows wells 1-6 for oil rate and panel (b) shows wells 7-12 for water rate.
Figure 18 shows similar NQDS variability after the uncertainty reduction for Group 2 (G2It3). Thus, we finished the global uncertainty reduction of Group 2, as there was no significant improvement after the last iteration.
We ran another sensitivity analysis varying all attributes, and the correlation matrix showed no other influential attributes; the global uncertainty reduction is therefore complete. As the NQDS shows, we achieved good matches for several wells. Figure 19 shows the oil and water production for well PROD6.
Fig. 19. Oil and water rates for PROD6. Gray lines are the initial diagnosis, green lines are the results after global uncertainty reduction (G3It1), and blue circles are the history. Brown lines are the curves where |NQDS| ≤ 1.
Figure 19 shows a significant reduction in the dispersion of the curves, which translates into good models, i.e., NQDS between −1 and 1. Figure 20 shows the production forecast after the uncertainty reduction.
Fig. 20. Simulated cumulative (a) oil and (b) water production. In gray are models with |NQDS| ≥ 5 and in brown are models with |NQDS| < 5. In blue is the reference model.
As Figure 20 shows, the field oil-production forecast is good, and the uncertainty of production was reduced at the field level. The water-production forecast also matches the history well, although it shows higher production in the forecast period.
To quantify the matching quality, Table 5 shows the percentage of models within the acceptance range. "Diag" is the diagnosis, "It" the iteration, and "Var" the dynamic variable. "Ql" is the liquid rate, "Qo" the oil rate, "Qwp" the produced water rate, "Wbt" the water breakthrough, and "Qwi" the injected water rate. "BHP" and "BHPi" are the bottom-hole pressures of producers and injectors, respectively.
Percentage of models within the acceptance range (−5 ≤ NQDS ≤ 5), for each well.
As Table 5 shows, Ql, BHP, Qwi, and BHPi improved after G2It3, as the percentages increased; for instance, BHP for well 5 increased from 99.6% in G1It4 to 100% in G2It3. For Qo, Qwp, and Wbt, wells PROD4, 6, and 10 improved after G2It3. Overall, there is a clear improvement over the diagnosis.
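The bookkeeping behind Table 5 reduces to counting the models inside the acceptance range per well; a minimal sketch with a synthetic NQDS matrix:

```python
import numpy as np

# Synthetic NQDS matrix: 500 models x 12 producer wells for one
# dynamic variable. Real values would come from the simulation runs.
rng = np.random.default_rng(2)
nqds = rng.normal(0, 4, size=(500, 12))

# Percentage of models with -5 <= NQDS <= 5, per well.
within = (np.abs(nqds) <= 5).mean(axis=0) * 100
for w, pct in enumerate(within, start=1):
    print(f"PROD{w}: {pct:.1f}%")
```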
Figure 21 shows the number of accepted models as a function of the NQDS cutoff. Each point is the number of models whose Objective Functions are all smaller than the corresponding value on the abscissa. "Group 1" and "Group 2" refer to the last iteration of the uncertainty reduction of each group.
Fig. 21. Number of accepted models as a function of NQDS cutoff.
Figure 21 clearly shows the improvement for the oil production rate. For water breakthrough, models only become accepted at NQDS cutoffs above 10 after the Group 2 reduction; nevertheless, several individual wells have good matches, and the curve is dominated by the few wells with NQDS above 10.
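The curve of Figure 21 can be reproduced with the same kind of bookkeeping: for each cutoff, count the models whose worst Objective Function stays below it. A sketch with synthetic data:

```python
import numpy as np

# Synthetic NQDS matrix: 500 models x 56 Objective Functions.
rng = np.random.default_rng(3)
nqds = rng.normal(0, 4, size=(500, 56))

worst = np.abs(nqds).max(axis=1)               # worst OF per model
for cutoff in (1, 2, 5, 10, 20):
    print(cutoff, int((worst <= cutoff).sum()))  # accepted models per cutoff
```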
Table 6 shows, for each dynamic variable, the percentage of models with all wells within the acceptance range.
Percentage of models with all wells within acceptance range (−5 ≤ NQDS ≤ 5) for each dynamic variable.
Table 6 shows the improvement after the uncertainty reduction of Group 2, as more models fall within the acceptance range for almost all dynamic variables. However, no model had NQDS within the acceptance range for all 56 dynamic variables simultaneously.
The application of the methodology improved the matching quality of several wells. However, as the results show, the global uncertainty reduction is not sufficient to match all Objective Functions simultaneously; local refinements are necessary to achieve that. Regional/local uncertainty reduction is a topic for future work.
Additionally, the sampling does not account for possible correlations between influential attributes; we assume they are independently distributed. Future work may propose a methodology to address the interactions between influential attributes.
6 Conclusion
We proposed a new methodology for uncertainty reduction in naturally fractured reservoirs, which allows the integration of history matching with geostatistical modeling of fractures and other properties in a big loop approach. We also proposed an iterative sensitivity analysis that reduces the number of attributes in the uncertainty reduction and so reduces simulation time while still providing good matches. Furthermore, we demonstrated the applicability of the methodology in practical situations through a complex case study. The conclusions of this work are:
- The iterative sensitivity analysis allowed us to work with several key attributes by reducing the number of uncertain attributes in each iteration. For instance, after reducing the uncertainty of Group 1, we ran a new sensitivity analysis and identified other attributes, previously hidden by more influential ones, that affect the OFs.
- We reduced the overall NQDS by modifying only five attributes (after iteration G2It3) out of the initial 18. Fewer attributes improve the chances of obtaining good combinations (models closer to the history) because the search space is significantly reduced: in this work, the possible combinations dropped from 5¹⁸ (about 3.8 × 10¹²) to 5³ (125) for the first iteration. The method thus reduced the uncertainty with fewer simulations than a case varying all 18 uncertain attributes.
- We combined geostatistical attributes and kept geological consistency. For instance, at the end of the methodology we have the values of fracture aperture, length, and orientation, from which we can reproduce the discrete fracture network. Had we modified fracture spacing and permeability directly in the simulation model, we could not reproduce the discrete fracture network that generated the results.
- Modifying attributes from the geostatistical simulations, together with the local Objective Function approach, reduced the variability of NQDS and improved its values for all wells. As we worked on a Type II reservoir, fracture attributes have a greater impact on pressures than on flow rates, as shown in Figure 17 and Table 5, where NQDS dropped quickly. The evaluation of NQDS per dynamic variable (Table 6) showed the improvement at the global level.
We intend to continue this work with a local approach to improve specific regions of the reservoir or individual wells.
Acknowledgments
This work was conducted with the support of Energi Simulation and Petrobras within the ANP R&D levy as a "commitment to research and development investments". The authors are grateful for the support of the Center of Petroleum Studies (CEPETRO-UNICAMP/Brazil), the Department of Energy (DE-FEM-UNICAMP/Brazil), and the Research Group in Reservoir Simulation and Management (UNISIM-UNICAMP/Brazil). In addition, special thanks to CMG (IMEX) and Schlumberger (Petrel) for the software licenses.
References
- Al-Anazi A., Babadagli T. (2009) Automatic fracture density update using smart data and artificial neural networks, Comput. Geosci. 36, 3, 335–347, https://doi.org/10.1016/j.cageo.2009.08.005. [CrossRef] [Google Scholar]
- Allan J., Sun S.Q. (2003, January) Controls on recovery factor in fractured reservoirs: lessons learned from 100 fractured fields, SPE Annual Technical Conference and Exhibition, Society of Petroleum Engineers, https://doi.org/10.2118/84590-MS. [Google Scholar]
- Almeida Netto S.L., Schiozer D.J., Ligero E.L., Maschio C. (2003, January) History matching using uncertainty analysis, Canadian International Petroleum Conference, Petroleum Society of Canada, https://doi.org/10.2118/2003-145. [Google Scholar]
- Al-Harbi M., Cheng H., He Z., Datta-Gupta A. (2004, September) Streamline-Based Production Data Integration in Naturally Fractured Reservoirs, SPE Annual Technical Conference and Exhibition, Society of Petroleum Engineers, https://doi.org/10.2118/89914-PA. [Google Scholar]
- Avansi G.D., Maschio C., Schiozer D.J. (2016) Simultaneous history matching approach using reservoir-characterization and reservoir-simulation studies, SPE Reserv. Evalu. Eng. 19, 694–712, https://doi.org/10.2118/179740-PA. [Google Scholar]
- Bahar A., Ates H., Al-Deeb M.H., Salem S.E., Badaam H., Kelkar M. (2003, January) Practical approach in modeling naturally fractured reservoir: A field case study, SPE Annual Technical Conference and Exhibition, Society of Petroleum Engineers, https://doi.org/10.2118/84078-MS. [Google Scholar]
- Basbug B., Karpyn Z.T. (2008, October) Determination of relative permeability and capillary pressure curves using automated history-matching approach, SPE Eastern Regional/AAPG Eastern Section Joint Meeting, Society of Petroleum Engineers, https://doi.org/10.2118/117767-MS. [Google Scholar]
- Bourbiaux B. (2010) Fractured reservoir simulation: A challenging and rewarding issue, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 65, 2, 227–238, https://doi.org/10.2516/ogst/2009063. [CrossRef] [Google Scholar]
- Caers J. (2002, September) Geostatistical History matching under training-image based geological model constraints, SPE Annual Technical Conference and Exhibition, Society of Petroleum Engineers, https://doi.org/10.2118/77429-MS. [Google Scholar]
- Caers J. (2003) Efficient gradual deformation using streamline-based proxy method, J. Petrol. Sci. Eng. 39, 57–83, https://doi.org/10.1016/S0920-4105(03)00040-8. [CrossRef] [Google Scholar]
- Correia M.G., Maschio C., von Hohendorff Filho J.C., Schiozer D.J. (2016) The impact of time-dependent matrix-fracture fluid transfer in upscaling match procedures, J. Petrol. Sci. Eng. 146, 752–763, https://doi.org/10.1016/j.petrol.2016.07.039. [CrossRef] [Google Scholar]
- Cui H., Kelkar M. (2005, April) Automatic history matching of naturally fractured reservoir and a case study, SPE Western Regional Meeting, Society of Petroleum Engineers, https://doi.org/10.2118/94037-MS. [Google Scholar]
- De Lima A., Lange A., Schiozer D.J. (2009, June) Assisted history matching for the characterization and recovery optimization of fractured reservoirs using connectivity analysis, EAGE Annual Conference & Exhibition incorporating SPE Europec, Society of Petroleum Engineers, https://doi.org/10.2118/154392-MS. [Google Scholar]
- Feraille M., Marrel A. (2012) Prediction under uncertainty on a mature field, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 67, 2, 193–206. [CrossRef] [Google Scholar]
- Firoozabadi A. (2000) Recovery mechanisms in fractured reservoirs and field performance, J. Can. Pet. Technol. 39, 11, 13–17, https://doi.org/10.2118/00-11-DAS. [CrossRef] [Google Scholar]
- Ford A., Flynn H. (2005) Statistical screening of system dynamic models, Syst. Dyn. Rev. 21, 4, 273–303, https://doi.org/10.1002/sdr.322. [CrossRef] [Google Scholar]
- Gang T., Kelkar M. (2006, April) Efficient history matching in naturally fractured reservoirs, SPE/DOE Symposium on Improved Oil Recovery, Society of Petroleum Engineers, https://doi.org/10.2118/99578-MS. [Google Scholar]
- Gang T., Kelkar M. (2008) History matching for determination of fracture permeability and capillary pressure, SPE Reserv. Evalu. Eng. 11, 5, 813–822, https://doi.org/10.2118/101052-PA. [Google Scholar]
- Ghaedi M., Masihi M., Heinemann Z.E., Ghazanfari M.H. (2015) History matching of naturally fractured reservoirs based on the recovery curve method, J. Petrol. Sci. Eng. 126, 211–221, https://doi.org/10.1016/j.petrol.2014.12.002. [CrossRef] [Google Scholar]
- Gilman J.R. (2003, June) Practical aspects of simulation of fractured reservoirs, International forum on reservoir simulation, pp. 23–27, Buhl, Baden-Baden, Germany. [Google Scholar]
- Gilman J.R., Kazemi H. (1983) Improvements in simulation of naturally fractured reservoirs, Soc. Pet. Eng. J. 23, 4, 695–707. [CrossRef] [Google Scholar]
- Ginting V., Pereira F., Presho M., Wo S. (2011) Application of the two-stage Markov chain Monte Carlo method for characterization of fractured reservoirs using a surrogate flow model, Comput. Geosci. 15, 4, 691–707, https://doi.org/10.1007/s10596-011-9236-4. [CrossRef] [Google Scholar]
- Hu L.Y. (2003, April) History matching of object-based stochastic reservoir models, 13th Middle East Oil Show & Conference, Society of Petroleum Engineers, https://doi.org/10.2118/81503-MS [Google Scholar]
- Hu L.Y., Blanc G. (1998, September) Constraining a reservoir facies model to dynamic data using a gradual deformation method, ECMOR VI-6th European Conference on the Mathematics of Oil Recovery, European Association of Geoscientists & Engineers, https://doi.org/10.3997/2214-4609.201406609 [Google Scholar]
- Hu L.Y., Jenni S. (2005) History Matching of Object-Based Stochastic Reservoir Models, SPE J. 10, 3, 312–323, https://doi.org/10.2118/81503-MS. [CrossRef] [Google Scholar]
- Jenni S., Hu L.Y., Basquet R., de Marsily G., Bourbiaux B. (2007) History matching of a stochastic model of field-scale fractures: methodology and case study, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 62, 2, 265–276, https://doi.org/10.2516/ogst:2007022. [CrossRef] [Google Scholar]
- Jung A., Fenwick D., Caers J. (2013) Updating uncertainty in the conceptual geological representation of fractured reservoirs using production data, 75th EAGE Conference & Exhibition incorporating SPE EUROPEC, Society of Petroleum Engineers, https://doi.org/10.3997/2214-4609.20130924 [Google Scholar]
- Kassenov B., King G.R., Chaudhri M., Abdrakhmanova A., Jenkins S., Bateman P., Iskakov E. (2014, November) Efficient workflow for assisted history matching and brownfield design of experiments for the Tengiz field, SPE Annual Caspian Technical Conference and Exhibition, Society of Petroleum Engineers, https://doi.org/10.2118/172329-MS. [Google Scholar]
- Kuchuk F., Biryukov D., Fitzpatrick T. (2015) Fractured-reservoir modeling and interpretation, SPE J. 20, 5, 983–1004, https://doi.org/10.2118/176030-PA. [CrossRef] [Google Scholar]
- Lange A.G. (2009) Assisted history matching for the characterization of fractured reservoirs, AAPG Bull. 93, 11, 1609–1619, https://doi.org/10.1306/08040909050. [CrossRef] [Google Scholar]
- Long J.C.S., Remer J.S., Wilson C.R., Witherspoon P.A. (1982) Porous media equivalents for networks of discontinuous fractures, Water Resour. Res. 18, 3, 645–658, https://doi.org/10.1029/WR018i003p00645. [CrossRef] [Google Scholar]
- Lu L., Zhang D. (2015) Assisted history matching for fractured reservoirs by use of hough-transform-based parameterization, SPE J. 20, 5, 942–961, https://doi.org/10.2118/176024-PA. [CrossRef] [Google Scholar]
- Maschio C., Schiozer D.J. (2016) Probabilistic history matching using discrete Latin Hypercube sampling and nonparametric density estimation, J. Petrol. Sci. Eng. 147, 98–115, https://doi.org/10.1016/j.petrol.2016.05.011. [CrossRef] [Google Scholar]
- Mazo E.O.M., Schiozer D.J. (2013, June) Modeling fracture relative permeability-what is the best option? 75th EAGE Conference & Exhibition incorporating SPE EUROPEC 2013, Society of Petroleum Engineers, https://doi.org/10.3997/2214-4609.20130867 [Google Scholar]
- Mckay M.D., Morrison J.D., Upton S.C. (1999) Evaluating prediction uncertainty in simulation models, Comput. Phys. Commun. 117, 44–51, https://doi.org/10.1016/S0010-4655(98)00155-6. [CrossRef] [Google Scholar]
- Müller C., Siegesmund S., Blum P. (2010) Evaluation of the representative elementary volume (REV) of a fractured geothermal sandstone reservoir, Environ. Earth Sci. 61, 8, 1713–1724. https://doi.org/10.1007/s12665-010-0485-7. [CrossRef] [Google Scholar]
- Nejadi S., Leung J.Y., Trivedi J.J., Virues C. (2015) Integrated characterization of hydraulically fractured shale-gas reservoirs – production history matching, SPE Reserv. Evalu. Eng. 18, 4, 481–494, https://doi.org/10.2118/171664-PA. [CrossRef] [Google Scholar]
- Nelson R.A. (2001) Geologic analysis of naturally fractured reservoirs, Gulf Professional Publishing, Houston, TX. [Google Scholar]
- Oliver D.S., Chen Y. (2011) Recent progress on reservoir history matching: a review, Comput. Geosci. 15, 1, 185–221, https://doi.org/10.1007/s10596-010-9194-2. [CrossRef] [Google Scholar]
- Paico D.H.R. (2008) A new procedure for history matching naturally fractured reservoirs (Dissertation, Stanford University, USA). Retrieved from https://pangea.stanford.edu/ERE/pdf/pereports/Engineer/Rojas08.pdf? [Google Scholar]
- Rotondi M., Nicotra G., Godi A., Contento F.M., Blunt M.J., Christie M. (2006, January) Hydrocarbon production forecast and uncertainty quantification: A field application, SPE Annual Technical Conference and Exhibition, Society of Petroleum Engineers, https://doi.org/10.2118/102135-MS. [Google Scholar]
- Royer P., Auriault J.L., Lewandowska J., Serres C. (2002) Continuum modelling of contaminant transport in fractured porous media, Transp. Porous Media 49, 3, 333–359, https://doi.org/10.1023/A:1016272700063. [CrossRef] [Google Scholar]
- Rwechungura R.W., Dadashpour M., Kleppe J. (2011, March) Advanced history matching techniques reviewed, SPE Middle East Oil and Gas Show and Conference, Society of Petroleum Engineers, https://doi.org/10.2118/142497-MS. [Google Scholar]
- Schiozer D.J., Avansi G.D., dos Santos A.A.S. (2015) Risk Quantification combining geostatistical realizations and discretized Latin Hypercube, J. Braz. Soc. Mech. Sci. Eng. 1–13, https://doi.org/10.1007/s40430-016-0576-9. [Google Scholar]
- Suzuki S., Daly C., Caers J., Mueller D. (2007) History matching of naturally fractured reservoirs using elastic stress simulation and probability perturbation method, SPE J. 12, 1, 118–129, https://doi.org/10.2118/95498-PA. [CrossRef] [Google Scholar]
- Taylor T.R.B., Ford D.N., Ford A. (2010, December) Improving model understanding using statistical screening, Proceedings of the 2010 Winter Simulation Conference (WSC), https://doi.org/10.1002/sdr.428. [Google Scholar]
- Tolstukhin E., Lyngnes B., Sudan H.H. (2012, June) Ekofisk 4D seismic history matching workflow, EAGE Annual Conference & Exhibition incorporating EUROPEC, Society of Petroleum Engineers, https://doi.org/10.2118/154347-MS [Google Scholar]
- Tran N.H. (2004) Characterization and modelling of naturally fractured reservoirs (Doctoral thesis, University of New South Wales, Sydney, Australia). Retrieved from http://handle.unsw.edu.au/1959.4/20559. [Google Scholar]
- Vasco D.W., Datta-Gupta A., Long J.C.S. (1997) Integrating field production history in stochastic reservoir characterization, SPE Form. Eval. 12, 3, 149–156, https://doi.org/10.2118/36567-PA. [CrossRef] [Google Scholar]
- Verscheure M., Fourno A., Chilès J.P. (2012) Joint inversion of fracture model properties for CO2 storage monitoring or oil recovery history matching, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 67, 2, 221–235, https://doi.org/10.2516/ogst/2011176. [CrossRef] [Google Scholar]
- Warren J.E., Root P.J. (1963) The behavior of naturally fractured reservoirs, Soc. Pet. Eng. J. 3, 3, 245–255. [CrossRef] [Google Scholar]