Regular Article
On the prediction of pseudo relative permeability curves: metaheuristics versus Quasi-Monte Carlo
Institute of Enhanced Oil Recovery, Center for Exploration and Production Studies and Research, Research Institute of Petroleum Industry (RIPI), PO Box 146651998, Tehran, Iran
^{*} Corresponding author: fazelb@ripi.ir
Received: 11 March 2018
Accepted: 6 March 2019
This article reports the first application of the Quasi-Monte Carlo (QMC) method for estimation of pseudo relative permeability curves. In this regard, the performance of several metaheuristic algorithms, including the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and the Artificial Bee Colony (ABC), has also been compared against QMC. The mechanism of minimizing the objective function has been studied for each method. The QMC outperformed its counterparts in terms of accuracy and of efficiently sweeping the entire search domain. Nevertheless, its computational time requirement exceeded that of the metaheuristic algorithms.
© B. Fazelabdolabadi et al., published by IFP Energies nouvelles, 2019
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
As an integral part of the economics of a field, the selection of an optimum development plan relies heavily on the accuracy of relative permeability information. Such information is normally provided by laboratory measurements on rock samples, to be later used as inputs to models capable of capturing detailed geological heterogeneities. In practice, using models with length scales comparable to rock samples is hindered by the computational time requirement. Consequently, effort is made to construct and use coarser models, with essentially larger block sizes, based on the information in the fine models. In this regard, the laboratory-measured relative permeabilities (rock curves) need to be adjusted before being applied to the coarser models. In other words, the relative permeability curves must be upscaled for the coarser models; otherwise, errors may arise from disregarding the reservoir heterogeneity.
The earliest attempts at upscaling rock curves used the concept of pseudo-functions (Coats et al., 1971; Hearn, 1971; Jacks et al., 1973; Kyte and Berry, 1975). Nevertheless, their further development was hindered for two reasons. First, the dependency of pseudo-functions on the boundary conditions and on the position of blocks in the coarse grid introduces inevitable errors when they are used for larger models. Second, the difficulty of generating new pseudo-functions for each new scenario is practically insurmountable (Artus and Noetinger, 2004). As an alternative to the pseudo-function method, the History-Matching (HM) technique was later proposed for estimating the pseudo relative permeability curves of coarse models (Fayazi et al., 2016; Johnson et al., 1982; Tan, 1995; Wang et al., 2009).
On a trial basis, the history-matching method seeks relative permeability curves (for a coarse model) that can reproduce the data obtained from a corresponding fine-structured model. Since its inception, machine search algorithms have been extensively used in the HM process to reduce the time required for its implementation. In essence, HM is a mathematical minimization problem that invokes optimization algorithms to find the appropriate pseudo relative permeability curves; hence, optimization algorithms are ubiquitous in HM implementations.
The literature reports the usage of several optimization schemes for HM purposes, both gradient-based and stochastic (Hajizadeh et al., 2010; Zhang and Reynolds, 2002). In gradient-based algorithms, the search direction is determined by calculating the Hessian matrix or the gradient of the objective function with respect to the model parameters. Gradient-based algorithms include conjugate gradient, Gauss-Newton, Quasi-Newton, and steepest descent. In spite of their favorable convergence, gradient-based algorithms may be trapped in a local optimum in the vicinity of the initial guess (Landa et al., 2005). Moreover, these methods may not be appropriate for large-scale reservoirs with complex objective functions (Hou et al., 2012).
In comparison, stochastic methods are suitable for problems with essentially complex objective functions. Well-known stochastic algorithms include the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and the Neighborhood Algorithm (NA). Finding the global optimum through stochastic methods, however, may require a large number of runs to reach an appropriate convergence, a potential drawback for field-scale purposes (Hou et al., 2012). Numerical techniques such as the Ensemble Kalman Filter (EnKF) and Ensemble-based Optimization (EnOpt) have also been introduced into HM to speed up its convergence (Aanonsen and Naeval, 2009; Chen et al., 2008).
The present article contributes to the existing literature in two ways. First, it reports the first application of the Quasi-Monte Carlo (QMC) technique for handling the optimization problem encountered in the HM process. Second, it provides a benchmark comparison between the QMC method and several metaheuristic techniques.
The rest of the article is organized as follows. The next section presents the mathematical foundations as well as the algorithmic details of the several optimization methods used. A description of our results is provided in the third section, followed by our concluding remarks.
2 Methods
The obtained pseudo-K _{r} values may differ significantly from their experimental counterparts when applied to the field. This is due to the confluent effects of different phenomena: fine-grid heterogeneity, numerical dispersion, and capillary pressure (Fouda, 2016; Zarifi et al., 2012). As such, conventional K _{r} estimation techniques, such as the Corey method, may be ineffective in yielding accurate pseudo-K _{r} values, driving the need for correlation-free methods with the flexibility to predict pseudo-K _{r} values on a point-based framework. The method proposed herein possesses this feature and hence can be quite effective in capturing any complexity that arises from field effects. Finding the optimal relative permeability curves can be handled within an optimization context. In this sense, the problem can be equivalently expressed as finding the optimal value of an objective function, f, in an s-dimensional domain. In practice, the number of dimensions is the number of saturation points at which data are sought for the corresponding relative permeability curves, K _{ro} and K _{rw}. In the present work, a domain of 18 dimensions was considered (s = 18), nine dimensions for each curve. This choice was made because we were interested in computing the relative permeability values at water saturations of S _{w} = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9] for each curve. Consequently, a point may be conceived in an s-dimensional space, ω, with entries of the relative permeability curves at the considered water saturations (Eq. (1)):
$$\omega =\left({x}_{1},...,{x}_{s}\right),$$(1)with ${x}_{i}={{K}_{\mathrm{ro}}}_{{S}_{\mathrm{wi}}}$ for (1 ≤ i ≤ 9) and ${x}_{i}={{K}_{\mathrm{rw}}}_{{S}_{\mathrm{wi}}}$ for (10 ≤ i ≤ s). As the point represents both curves simultaneously, it can be input into a reservoir simulator. Generally, the entire field production behavior, including oil and water flow rates and pressure data versus time, is used to achieve an acceptable match between the fine and coarse models during history matching (Lee and Seinfeld, 1987; Kulkarni and Datta-Gupta, 2000). Hence, output results are obtained for the Field Oil Production Rate (FOPR), Field Pressure (FPR), and Field Water Cut Total (FWCT) corresponding to the point being considered. The objective function incorporates these outputs and sums the absolute differences against a benchmark case (Eq. (2)),
$$f\left(\omega \right)=\sum \left|\mathrm{FOP}{\mathrm{R}}_{\omega}-\mathrm{FOP}{\mathrm{R}}_{\mathrm{benchmark}}\right|+\sum \left|\mathrm{FP}{\mathrm{R}}_{\omega}-\mathrm{FP}{\mathrm{R}}_{\mathrm{benchmark}}\right|+\sum \left|\mathrm{FWC}{\mathrm{T}}_{\omega}-\mathrm{FWC}{\mathrm{T}}_{\mathrm{benchmark}}\right|.$$(2)
Presumably, the relative permeability curves are the true ones if the simulator reproduces the results of the benchmark case. This corresponds to the exact moment when the value of the objective function reaches its global minimum. The original issue is therefore reduced to a minimization problem in an s-dimensional space.
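The objective of Eq. (2) is straightforward to express in code. The sketch below is illustrative only: the function and variable names (`objective`, `fopr_ref`, and so on) are our own, and the toy numbers stand in for simulator output at three report times, not for the paper's data.

```python
import numpy as np

def objective(fopr, fpr, fwct, fopr_ref, fpr_ref, fwct_ref):
    """Sum of absolute deviations of the coarse-model outputs (FOPR, FPR,
    FWCT) from the fine-model benchmark series, per Eq. (2)."""
    return (np.abs(np.asarray(fopr) - fopr_ref).sum()
            + np.abs(np.asarray(fpr) - fpr_ref).sum()
            + np.abs(np.asarray(fwct) - fwct_ref).sum())

# Toy example with three report times:
f = objective([100, 90, 80], [3000, 2900, 2800], [0.1, 0.2, 0.3],
              np.array([98, 91, 79]), np.array([3010, 2895, 2805]),
              np.array([0.1, 0.22, 0.29]))
```

A lower value of `f` means the trial point ω reproduces the benchmark more closely; the search algorithms below all minimize this quantity.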
Several optimization techniques were used to minimize f, as encountered in the prediction of relative permeability curves. We adopted metaheuristic as well as QMC techniques. A description of the mathematical principle behind each method is given below.
2.1 Quasi-Monte Carlo
Let Ω be a separable topological space in an s-dimensional space. Any point in Ω can be described by a set of s values, ω = (x _{1}, x _{2},…, x _{ s }) with ${x}_{i}\in \mathfrak{R}$ for 1 ≤ i ≤ s. Let f be a real-valued function on Ω for which a global minimum is sought. Since f is assumed to hold a global minimum in the region of interest, it is bounded from below, and we define its global minimum as:
$$m\left(f\right)=\mathrm{mi}{\mathrm{n}}_{\omega \in \Omega}f\left(\omega \right).$$(3)
Let λ be a probability measure on Ω. Furthermore, consider S as a sequence of N independent λdistributed random samples ω _{1}, ω _{2},…, ω _{ N } ∈ Ω. We define,
$${m}_{N}\left(f;S\right)=\mathrm{mi}{\mathrm{n}}_{1\le i\le N}f\left({\omega}_{i}\right).$$(4)
The QMC method of quasi-random search makes use of a deterministic sequence of points ω _{1}, ω _{2},…, ω _{ N } in Ω in order to find the global infimum. It is proved that m _{ N } (f; S) converges to the global minimum of f with unit probability, if f is continuous and if a positive probability measure (λ > 0) is assigned to every nonempty open subset of Ω (Niederreiter, 1994),
$$\mathrm{li}{\mathrm{m}}_{N\to \mathrm{\infty}}{m}_{N}\left(f;S\right)=m\left(f\right).$$(5)
Consider a point set P = (ω _{1}, ω _{2},…, ω _{ N }). The dispersion of the point set P in Ω is defined by:
$$\mathrm{dis}{\mathrm{p}}_{N}\left(P;\mathrm{\Omega}\right)=\mathrm{ma}{\mathrm{x}}_{\omega \in \mathrm{\Omega}}\hspace{0.5em}\mathrm{mi}{\mathrm{n}}_{1\le i\le N}\hspace{1em}\mathrm{disp}\left(\omega ,{\omega}_{i}\right),$$(6)where,
$$\mathrm{disp}\left(\zeta ,\gamma \right)=\mathrm{ma}{\mathrm{x}}_{1\le i\le s}\left|{y}_{i}-{z}_{i}\right|\hspace{1em}\mathrm{for}$$
$$\zeta =\left({y}_{1},{y}_{2},...,{y}_{s}\right);\hspace{0.5em}\gamma =\left({z}_{1},{z}_{2},...,{z}_{s}\right).$$(7)
The dispersion can be viewed as a measure of the deviation from denseness of S in Ω (Niederreiter, 1994). Let $\Psi (f;t)=\mathrm{ma}{\mathrm{x}}_{\zeta ,\gamma \in \mathrm{\Omega},\hspace{0.5em}\mathrm{disp}(\zeta ,\gamma )\le t}\left|f\left(\zeta \right)-f\left(\gamma \right)\right|$ for t ≥ 0 be the modulus of continuity of f. It is proved (Niederreiter, 1994) that for any point set P of N points in Ω, with dispersion disp_{ N } = disp_{ N } (P; Ω), we have:
$${m}_{N}\left(f;S\right)-m\left(f\right)\le \Psi \left(f;\mathrm{dis}{\mathrm{p}}_{N}\right),$$(8)if the metric space (Ω, disp) is bounded (i.e., $\mathrm{ma}{\mathrm{x}}_{\zeta ,\hspace{0.5em}\gamma \in \Omega}\hspace{1em}\mathrm{disp}(\zeta ,\gamma )$ < ∞). The theorem thus shows that point sets with small dispersion are suitable for quasi-random search purposes (Niederreiter, 1994).
The success of a quasi-random search largely depends on the condition disp_{ N } < ε, in which ε is a positive number less than 1/2 (Lei, 2002). Taking I ^{ s } = [0, 1]^{ s }, an absolute lower bound for the dispersion of any N points in I ^{ s } is (1/2)N ^{−1/s } (Niederreiter, 1984). It follows that N must be at least of the order of magnitude ε ^{−s } (Lei, 2002).
A typical point set used for the quasi-random search should also possess good properties in terms of its discrepancy, interpreted as the difference between the empirical distribution of the QMC point set and the uniform distribution (Drew and Homem-de-Mello, 2006). For a given point set $P=({\omega}_{1},{\omega}_{2},...,{\omega}_{N}{)}_{{\omega}_{i}\in {I}^{s}}$ and a subset $G\subset {I}^{s}$, we take S _{ N } (G) as the number of points ω _{ i } ∈ G, and define the discrepancy as (Lei, 2002):
$$\mathrm{disc}\left({\omega}_{1},{\omega}_{2},\dots ,{\omega}_{N}\right)=\mathrm{ma}{\mathrm{x}}_{\tau \in {I}^{s}}\left|\frac{{S}_{N}\left({G}_{\tau}\right)}{N}-{\tau}_{1}{\tau}_{2}\cdots {\tau}_{s}\right|,$$(9)where G _{ τ } = [0, τ _{1}) ×⋯× [0, τ _{ s }) is a rectangular region with an s-dimensional volume equal to τ _{1} τ _{2} ⋯ τ _{ s }.
QMC deals with infinite Low Discrepancy Sequences (LDS), which possess the additional property that, for arbitrary N, the initial segments have relatively small discrepancy (Lei, 2002). The merits of LDS are twofold: they provide uniform sample points, avoiding large gaps or clusters; in addition, they know about their predecessors and fill the gaps left from the previous iterations (Kucherenko, 2006), eliminating empty areas in which no information on the behavior of the underlying problem can be deduced.
The choice of LDS is therefore central to the QMC methodology. Different principles have been put forward to generate LDS sets (Bratley et al., 1992; Niederreiter, 1984, 1987, 1994; Sobol, 1976). We adopted the methodology developed by Sobol (1976). While other constructions, such as Niederreiter's (1994), yield better asymptotic properties (Kucherenko, 2006), Sobol's LDS sets provide enhanced reliability in terms of rapid convergence in high-dimensional settings (Jäckel, 2002). A description of Sobol's methodology for generating low-discrepancy sequences is deferred to Appendix A, to keep this text within a reasonable length.
Once an LDS set is available, the multistart QMC algorithm implements an inexpensive local search (such as steepest descent) on the quasi-random points to concentrate the sample, which is subsequently reduced by replacing the worse points (with higher function values) with new quasi-random points. A complete local search is then applied to any point retained for a certain number of iterations. Two stopping criteria may be conceived for this algorithm: first, if no new value for the local minimum is found after several iterations (Glover and Kochenberger, 2003); second, if the Number of Worse Stationary Points (NWSP) exceeds the Number of Stationary Points (NSP), usually by a factor of three (Hickernell and Yuan, 1997). The reader is referred to Appendix B for a more detailed description of the QMC algorithm.
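A multistart quasi-random search of this kind can be sketched as follows. This is not the authors' implementation: it uses SciPy's Sobol generator for the LDS and a Nelder-Mead polish in place of the steepest-descent local search mentioned above, and the sample and pool sizes are arbitrary choices of ours.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def qmc_search(f, s, n_points=128, n_keep=8):
    """Sketch of a multistart quasi-random search: sample a Sobol LDS in
    [0, 1)^s, keep the best points, and polish each with a cheap local
    search, returning the best local result."""
    sobol = qmc.Sobol(d=s, scramble=False)
    pts = sobol.random(n_points)                    # LDS points in [0, 1)^s
    best = pts[np.argsort([f(p) for p in pts])[:n_keep]]
    results = [minimize(f, p, method="Nelder-Mead") for p in best]
    return min(results, key=lambda r: r.fun)

# Toy quadratic with its minimum at 0.3 in every dimension:
res = qmc_search(lambda x: np.sum((x - 0.3) ** 2), s=4)
```

In the paper's setting, `f` would wrap a simulator run and Eq. (2) rather than an analytic function, and the convergence bound of Eq. (8) motivates using a low-dispersion point set for the starts.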
2.2 Artificial Bee Colony
Originally inspired by the foraging behavior of a honeybee colony, the Artificial Bee Colony (ABC) algorithm consists of three fundamental components: food sources, employed bees, and unemployed bees. For the explanation of the ABC method, we stick to the original terminology proposed by Karaboga and Akay (2009).
The employed bees are those employed at, and currently exploiting, a certain food source. They carry information about the distance, direction, and profitability of the food source and share this information with the other bees waiting at the hive. The unemployed bees are classified as either scout bees or onlooker bees. The former randomly search the environment to find a new (better) food source, while the latter seek a food source by means of the information passed on by an employed bee. An employed bee whose food source is depleted becomes a scout bee and starts to search for a new food source. The method further assumes the number of employed bees in the colony to be equal to the number of food sources (Karaboga and Akay, 2009). In practice, the position of a food source represents a possible solution to the optimization problem, whereas the amount of a food source corresponds to the quality (fitness) of the associated solution.
Once a sample of N (trial) solution points is generated within the search domain, ${\omega}_{i;1\le i\le N}$, its entries (food-source positions) are subject to repeated ABC cycles until a termination criterion is satisfied. Each ABC cycle entails tasks performed by each of the bee types, in which both global and local searches and selections are implemented. Since the tasks performed by each bee type are naturally different, they are explained separately, as follows.
An employed bee generates a candidate solution point for the ith food source to which it has been assigned, ${\omega}_{\mathrm{perturb},i}^{\mathrm{iter}}$ for 1 ≤ i ≤ N, by perturbing the (old) solution point, ${\omega}_{i}^{\mathrm{iter}}$.
$${\omega}_{\mathrm{perturb},i}^{\mathrm{iter}}={\omega}_{i}^{\mathrm{iter}}+\mathrm{rand}[-1,+1]({\omega}_{i}^{\mathrm{iter}}-{\omega}_{i}^{\mathrm{iterr}})$$(10)where iterr ∈ {1,…, iter} (iterr ≠ iter) is a randomly-selected index, and rand[−1, +1] is a random number in [−1, +1] that acts as a scaling factor. The employed bee then selects between ${\omega}_{\mathrm{perturb},i}^{\mathrm{iter}}$ and ${\omega}_{i}^{\mathrm{iter}}$, keeping the one that holds the lower function value, and refreshes its memory of the best solution point.
An onlooker bee assigns itself to a solution point (food source), with a probabilistic selection scheme. The scheme requires the probability values associated with food sources, as the input. The probability measure associated to the ith food source, prob_{ i }, is obtained by:
$${\mathrm{prob}}_{i}=\frac{\mathrm{fi}{\mathrm{t}}_{i}}{\sum _{j=1}^{\mathrm{iter}}\mathrm{fi}{\mathrm{t}}_{j}},$$(11)with,
$$\mathrm{fi}{\mathrm{t}}_{i}=\{\begin{array}{cc}\frac{1}{1+f\left({\omega}_{i}^{\mathrm{iter}}\right)}& ({f}_{i}\ge 0)\\ 1+\leftf\left({\omega}_{i}^{\mathrm{iter}}\right)\right& ({f}_{i}<0)\end{array}.$$(12)
An onlooker bee is assigned to a given food source (solution point) provided that the probability of that solution point is greater than a randomly-selected number in the range [0, 1]. Once assigned, an onlooker bee perturbs its solution point (Eq. (10)) in search of an improved local minimum. Needless to say, the perturbations are damped as the point converges.
A scout bee probes the s-dimensional space randomly, in search of a global solution point. A scout bee is unrestricted in the sense that it can adopt any location (point), as long as it remains inside the search-space boundaries. This is opposed to the strategy taken by the employed or onlooker bees, which are only allowed to select points generated around a previous solution point. If a local solution point cannot be improved further after a given number of iterates, it is abandoned, and the employed bees assigned to that point are converted to scouts. In practice, the limit for this conversion is taken as half the number of unemployed bees, in each dimension.
The possibility of being trapped in a local minimum should be minimal in ABC, as the scout bees independently search the entire search space, while the other bee types simultaneously probe their local candidates for the global minimum point. The algorithm to perform an ABC optimization is listed in Appendix C. For the ABC optimization, we considered N = 18 solution points at each iterate.
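The employed/onlooker/scout cycle described above can be sketched compactly. This is a generic textbook-style ABC, not the authors' code: the perturbation follows Eq. (10) with a per-dimension scaling factor, the fitness and probability follow Eqs. (11)-(12), and the cycle count and abandonment limit are our own illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_minimize(f, s, n_sources=18, n_cycles=200, limit=20):
    """Minimal ABC sketch: employed and onlooker bees perturb food
    sources (Eq. (10)); sources stagnant for `limit` cycles are
    abandoned and re-seeded by a scout."""
    x = rng.random((n_sources, s))          # food-source positions in [0, 1]^s
    fx = np.array([f(p) for p in x])
    trials = np.zeros(n_sources, dtype=int)

    def try_move(i):
        k = rng.choice([j for j in range(n_sources) if j != i])
        cand = np.clip(x[i] + rng.uniform(-1, 1, s) * (x[i] - x[k]), 0.0, 1.0)
        fc = f(cand)
        if fc < fx[i]:                      # greedy selection
            x[i], fx[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(n_cycles):
        for i in range(n_sources):          # employed-bee phase
            try_move(i)
        fit = np.where(fx >= 0, 1.0 / (1.0 + fx), 1.0 + np.abs(fx))  # Eq. (12)
        prob = fit / fit.sum()                                       # Eq. (11)
        for i in rng.choice(n_sources, n_sources, p=prob):  # onlooker phase
            try_move(i)
        for i in np.where(trials > limit)[0]:  # scout phase: re-seed stagnant
            x[i], fx[i], trials[i] = rng.random(s), f(rng.random(s)), 0
    best = int(np.argmin(fx))
    return x[best], fx[best]

xb, fb = abc_minimize(lambda p: np.sum((p - 0.4) ** 2), s=3)
```

In the paper's use, each function evaluation `f(p)` would correspond to one coarse-grid simulator run scored with Eq. (2).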
2.3 Particle Swarm Optimization
Introduced through simulation of a social model by Kennedy and Eberhart (1995), PSO is a population-based stochastic optimization procedure. The algorithm deals with repeated operations on a swarm, a sample of N (trial) solution points within the search domain, ${\omega}_{i;1\le i\le N}$. The points (or particles, as named in PSO) are continuously perturbed, in each dimension, using a velocity term, ω _{velocity}. The velocity term for a dimension of a given point is updated during each iteration by combining the previous velocity of that point with its current distance from its best-recorded location, as well as from the location of the global minimum point discovered thus far (Eq. (13)):
$${\omega}_{\mathrm{velocit}{\mathrm{y}}_{i,j}}^{\mathrm{iter}+1}=\mathrm{coe}{\mathrm{f}}_{1}\times {\omega}_{\mathrm{velocit}{\mathrm{y}}_{i,j}}^{\mathrm{iter}}+\mathrm{coe}{\mathrm{f}}_{2}\times \mathrm{ran}{\mathrm{d}}_{1}\times \left({\omega}_{\mathrm{bes}{\mathrm{t}}_{i,j}}-{\omega}_{i,j}^{\mathrm{iter}}\right)+\mathrm{coe}{\mathrm{f}}_{3}\times \mathrm{ran}{\mathrm{d}}_{2}\times \left({\omega}_{\mathrm{FBES}{\mathrm{T}}_{i,j}}-{\omega}_{i,j}^{\mathrm{iter}}\right),$$(13)where coef_{ i;1≤i≤3} are input parameters to PSO, rand_{ i;1≤i≤2} are random coefficients drawn from a uniform probability distribution in the range [0, 1], ω _{best} is the best-recorded location of the point (particle), and ω _{FBEST} represents the point with the global minimum value recorded up to the iterate. In equation (13), the first and second indices, i and j, refer to the point and dimension numbers, respectively. Having perturbed the solution points, the information on the best performances in the swarm is renewed after each cycle. The algorithm stops upon satisfaction of a stopping criterion, and is explained in Appendix D. The PSO implementation in this article used the parameters N = 40, coef_{1} = 0.72134, coef_{2} = 1.1931, coef_{3} = 1.1931.
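The velocity update of Eq. (13) can be sketched as below, using the coefficient values quoted in the text. The swarm initialization, clipping to the unit box, and fixed iteration count are our own assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, s, n=40, iters=300,
                 coef1=0.72134, coef2=1.1931, coef3=1.1931):
    """Minimal PSO sketch using the inertia/cognitive/social velocity
    update of Eq. (13)."""
    x = rng.random((n, s))                   # particle positions in [0, 1]^s
    v = np.zeros((n, s))
    pbest, fpbest = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(fpbest)].copy()      # global best found so far
    for _ in range(iters):
        r1, r2 = rng.random((n, s)), rng.random((n, s))
        v = coef1 * v + coef2 * r1 * (pbest - x) + coef3 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 1.0)
        fx = np.array([f(p) for p in x])
        better = fx < fpbest                 # refresh personal bests
        pbest[better], fpbest[better] = x[better], fx[better]
        g = pbest[np.argmin(fpbest)].copy()  # refresh global best
    return g, float(fpbest.min())

g, fg = pso_minimize(lambda p: np.sum((p - 0.6) ** 2), s=3)
```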
2.4 Genetic Algorithm
Perhaps the most widely used of the metaheuristic methods, the Genetic Algorithm (GA) is based on two basic operations: crossover and mutation. The former takes two solution points (or individuals, as named in GA) and creates a single point by mixing their features; the latter assigns a random value to one of the features. As a result, the main parameters consist of a probability measure for each operation. We refer the reader to the available literature (Coley, 1999; Haupt and Haupt, 2004; Simon, 2013; Sivanandam and Deepa, 2007; Yu and Gen, 2010) for in-depth knowledge of the method. Our optimization results for the GA were produced using a probability measure of 0.8 for crossover and 0.1 for mutation, with N = 50 points.
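A GA of this shape can be sketched with the stated probabilities (0.8 crossover, 0.1 mutation, N = 50). The text does not specify the operators, so the tournament selection, uniform crossover, and random-reset mutation below are our illustrative choices, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def ga_minimize(f, s, n=50, gens=200, p_cross=0.8, p_mut=0.1):
    """Minimal real-coded GA sketch: binary tournament selection,
    uniform crossover (prob. p_cross) and random-reset mutation
    (prob. p_mut per gene)."""
    pop = rng.random((n, s))
    fx = np.array([f(p) for p in pop])
    for _ in range(gens):
        new = []
        for _ in range(n):
            a, b = rng.integers(n, size=2)           # tournament for parent 1
            p1 = pop[a] if fx[a] < fx[b] else pop[b]
            a, b = rng.integers(n, size=2)           # tournament for parent 2
            p2 = pop[a] if fx[a] < fx[b] else pop[b]
            child = p1.copy()
            if rng.random() < p_cross:               # uniform crossover
                mask = rng.random(s) < 0.5
                child[mask] = p2[mask]
            mut = rng.random(s) < p_mut              # random-reset mutation
            child[mut] = rng.random(int(mut.sum()))
            new.append(child)
        pop = np.array(new)
        fx = np.array([f(p) for p in pop])
    best = int(np.argmin(fx))
    return pop[best], float(fx[best])

xb, fb = ga_minimize(lambda p: np.sum((p - 0.25) ** 2), s=3)
```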
3 Results and discussion
3.1 Model description
The fine model considered herein is a three-dimensional, heterogeneous, and anisotropic reservoir model with a total of 9000 (30 × 30 × 10) grid blocks. Two distinct zones with different petrophysical properties have been considered in the fine-grid model. The porosity and permeability are distributed randomly on the fine grid using a truncated normal distribution, based on the parameters presented in Table 1. A no-flow condition is assumed at the boundaries of the model. The reservoir is initially at an oil saturation of 0.75. A water-flooding process in one quarter of a five-spot pattern is simulated. An injection well and a production well are placed in opposite corners of the reservoir model. Both wells are completed through the entire reservoir thickness, with no mechanical skin. The black-oil reservoir simulator ECLIPSE 100™ was used for all the simulations conducted. The fine-model properties used in the simulator are listed in Table 2. The set of relative permeability curves used in the fine model is shown in Figure 1, assuming only one rock type in the model. The coarse model considered comprises 200 grid blocks (10 × 10 × 2). The porosity of each coarse-grid block was estimated through the volume-weighted average (Eq. (14)), dividing the total void volume of all the fine-grid blocks building the coarse-grid block by their total bulk volume,
$$\overline{\phi}=\frac{\sum _{k}\sum _{j}\sum _{i}\left(\mathrm{V\phi}\right)}{\sum _{k}\sum _{j}\sum _{i}\left(V\right)},$$(14)where V and φ are fine grid block volume and porosity, respectively.
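The volume-weighted average of Eq. (14) can be sketched as a block reduction over the fine grid. The coarsening factors (3, 3, 5) are inferred from the stated 30 × 30 × 10 fine and 10 × 10 × 2 coarse grids; the function name and the uniform toy arrays are ours.

```python
import numpy as np

def upscale_porosity(phi, vol, factors=(3, 3, 5)):
    """Volume-weighted average porosity (Eq. (14)) for each coarse block:
    total pore volume over total bulk volume of the fine cells it spans."""
    fi, fj, fk = factors
    ni, nj, nk = phi.shape
    pv = phi * vol                                # pore volume per fine cell
    shape = (ni // fi, fi, nj // fj, fj, nk // fk, fk)
    num = pv.reshape(shape).sum(axis=(1, 3, 5))   # total pore volume
    den = vol.reshape(shape).sum(axis=(1, 3, 5))  # total bulk volume
    return num / den

# Uniform toy case: every coarse block inherits the fine porosity of 0.2.
phi = np.full((30, 30, 10), 0.2)
vol = np.ones((30, 30, 10))
coarse = upscale_porosity(phi, vol)
```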
Fig. 1 The relative permeability curves used in the fine model. 
The porosity and permeability distribution parameters in the finegrid model.
The finegrid model characteristics.
As for the permeability estimation, a combination of the harmonic and arithmetic averages was used. For instance, the permeability in the x-direction was estimated by equation (15):
$${\overline{k}}_{x}=\frac{\sum _{i}\left(\Delta X\right)}{\sum _{i}\frac{\Delta X}{\left(\frac{\sum _{k}\sum _{j}\left({k}_{x}A\right)}{\sum _{k}\sum _{j}\left(A\right)}\right)}},$$(15)where ∆X is the size of a fine-grid block in the x-direction, k_{ x } is the fine-grid block permeability in the x-direction, and A is the cross-section of the fine-grid block perpendicular to the x-direction.
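For a single coarse block, Eq. (15) amounts to an area-weighted arithmetic mean over each y-z slice followed by a thickness-weighted harmonic mean along x. A sketch, with names and toy numbers of our own:

```python
import numpy as np

def upscale_kx(kx, dx, area):
    """Harmonic-arithmetic average of Eq. (15) for one coarse block.
    kx, area: (ni, nj, nk) arrays over the fine cells; dx: (ni,) slice
    thicknesses along x."""
    # Area-weighted arithmetic mean over each y-z slice (inner fraction):
    slice_k = (kx * area).sum(axis=(1, 2)) / area.sum(axis=(1, 2))
    # Thickness-weighted harmonic mean of the slice averages along x:
    return dx.sum() / (dx / slice_k).sum()

# Two equal-thickness slices with slice-average permeabilities 100 and 400:
kx = np.stack([np.full((2, 2), 100.0), np.full((2, 2), 400.0)])
k_up = upscale_kx(kx, dx=np.array([1.0, 1.0]), area=np.ones((2, 2, 2)))
# harmonic mean of 100 and 400 -> 160
```

The harmonic step along the flow direction reflects flow in series through the slices, while the arithmetic step within a slice reflects flow in parallel.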
Figures 2a and b represent the permeability map for the fine and the coarse grid models, respectively. The porosity map for the fine and the coarse grid models are presented in Figure 3.
Fig. 2 The permeability map for the (a) finegrid and (b) coarsegrid models. 
Fig. 3 The porosity map for the (a) finegrid and (b) coarsegrid models. 
The stability of a water flooding depends on the total mobility ratio evaluated across the front (M _{ f }), which is calculated by equation (16). According to the Buckley-Leverett calculation, M _{ f } = 0.876, which implies a stable displacement in the considered fine-grid model. Figure 4 depicts snapshots of the saturation map at breakthrough time, in two views, for the fine- and coarse-grid models: first, the horizontal displacement in the bottom zone; second, the vertical displacement from the injection well to the production well. The horizontal displacement appears uniform and stable, in agreement with the above conclusion; however, the vertical displacement shows some instability because of the contrast in petrophysical properties between the top and bottom zones, as well as the density of water.
$${M}_{f}=\frac{\lambda \left({S}_{f}\right)}{\lambda \left({S}_{\mathrm{min}}\right)}=\frac{{\mu}_{o}{k}_{\mathrm{rw}}\left({S}_{f}\right)+{\mu}_{w}{k}_{\mathrm{ro}}\left({S}_{f}\right)}{{\mu}_{o}{k}_{\mathrm{rw}}\left({S}_{\mathrm{min}}\right)+{\mu}_{w}{k}_{\mathrm{ro}}\left({S}_{\mathrm{min}}\right)}.$$(16)
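Equation (16) is a direct ratio of total mobilities. The sketch below uses hypothetical illustrative values, not the paper's data; the only paper-specific fact is that M _{ f } < 1 indicates a stable front.

```python
def front_mobility_ratio(krw_f, kro_f, krw_min, kro_min, mu_o, mu_w):
    """Total mobility ratio across the front, Eq. (16): total mobility at
    the front saturation S_f over that at S_min (ahead of the front)."""
    return ((mu_o * krw_f + mu_w * kro_f) /
            (mu_o * krw_min + mu_w * kro_min))

# Hypothetical values: ahead of the front only oil flows (krw = 0, kro = 1).
M_f = front_mobility_ratio(krw_f=0.15, kro_f=0.25, krw_min=0.0,
                           kro_min=1.0, mu_o=2.0, mu_w=0.5)
# (2*0.15 + 0.5*0.25) / (0.5*1.0) = 0.85 < 1: stable displacement
```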
Fig. 4 The snapshots of saturation map for (a) horizontal displacement in coarse grid, (b) horizontal displacement in fine grid, (c) vertical displacement from injection to production well in fine grid and (d) vertical displacement from injection to production well in coarse grid. 
3.2 Results
The results obtained from simulations of the fine-grid system are compared against the results obtained from the corresponding coarse-grid system, using the different numerical algorithms adopted (GA, PSO, ABC, QMC). The results for the oil production rate and the reservoir pressure/water-cut versus time are used for comparison. Figure 5 shows the computed oil production rate in both systems over a period of 15 years. Clearly, all the methods tested have been capable of accurately reproducing the field's oil production rate, as compared with the fine-grid case. The situation is somewhat different for the reservoir pressure estimations (Fig. 6). As can be noticed, the PSO outputs are the closest to the fine-grid reservoir pressures, while the ABC results show the largest deviations. With the exception of ABC, all the methods have produced satisfactory results for the water-cut over the time span considered (Fig. 7).
Fig. 5 The estimated oil production rates versus time. 
Fig. 6 The estimated reservoir pressures versus time. 
Fig. 7 The estimated watercuts versus time. 
The pseudo relative permeability curves estimated by each technique are presented in Figure 8. The deviation of the pseudo relative permeability curves from the rock curves is expected, owing to the altered heterogeneities and numerical dispersion in the coarse-grid system. On the other hand, the pseudo relative permeability curves estimated by the methods adopted are seemingly identical, pointing to the uniqueness of the relative permeability curves for the coarse-grid system considered. This finding is logical, as the underlying physics of the problem should be independent of the type of numerical algorithm employed. The uniqueness of the generated pseudo relative permeability curves can also be interpreted as the algorithms having successfully reached the global minimum in their entirety.
Fig. 8 The estimated pseudo relative permeability curves versus the rock curves. 
In this work, the choice of a (new) trial relative permeability curve was made directly, by randomly selecting an entry from a pool of plausible choices. This pool consisted of nearly 500 ECLIPSE-acceptable curves generated in the vicinity of the last optimum curve detected, during each iterate. This renders a model representation of the relative permeability curves rather unnecessary. Our methodology also makes no use of the transformation of control points in the B-spline process (Chen et al., 2008), obviating the need to solve a system of linear equations.
As the size of the system to be simulated may increase in a field situation, the time requirement of the simulations may become an issue. Therefore, one may wish to find out how closely each iterate approaches the global optimum. To this end, understanding the mechanism of the target approach is essential. To complement this work, we have also reported this mechanism for the numerical methods adopted (GA, PSO, ABC, QMC) in Figure 9. As evident from the figure, the QMC outputs have plummeted to the global optimum within a few iterates, followed by the GA. On the other hand, the approach of the PSO/ABC has been a rather steady decrease. This behavior of the QMC method stems from its capability of efficiently searching the entire search space. The only drawback of QMC, however, remains its time requirement (Fig. 10). For a given number of iterates, our results indicate that the QMC has taken more time to run than the metaheuristic algorithms. We have also measured the maximum error encountered in each method, against the corresponding fine-grid results (Fig. 11). Since the computed reservoir pressure curve shows the largest deviation in all the methods considered, this maximum error was taken as the percentage change in the worst situation, when compared with the corresponding pressure value from the fine-grid model. The QMC again outperforms the metaheuristic methods used in this regard.
Fig. 9 The mechanism of objectivefunction minimization in the numerical methods adopted. 
Fig. 10 The runtime of the numerical methods adopted. 
Fig. 11 Maximum error encountered in each numerical method, compared against the finegrid results. 
4 Conclusion
The application and performance of several metaheuristic methods (GA, PSO, ABC) were investigated versus the QMC technique, for forecasting the pseudo relative permeability curves of a synthetic model. The QMC method outperforms its metaheuristic counterparts in accuracy and in its efficient strategy of searching the entire problem domain. Our methodology of randomly selecting the new trial relative permeability curve from a pool of simulator-acceptable curves in the vicinity of the last optimum curve detected has rendered a model representation of the curves rather unnecessary. In addition, this strategy has obviated the need to solve a system of linear equations, as commonly encountered in the method of transformation of control points in the B-spline scenario. The mechanism by which the QMC approaches the global minimum, dropping suddenly to the proximity of the solution, should be taken as another advantage. The only drawback of QMC, as our results indicate, resides in its computational time requirement, which is higher than that of the metaheuristic methods.
References
 Aanonsen S.I., Nævdal G. (2009) The ensemble Kalman filter in reservoir engineering: a review, SPE J. 14, 393–412. doi: 10.2118/117274-PA.
 Artus V., Noetinger B. (2004) Upscaling two-phase flow in heterogeneous reservoirs: current trends, Oil Gas Sci. Technol. – Rev. IFP Energies nouvelles 59, 185–195.
 Bratley P., Fox B.L., Niederreiter H. (1992) Implementation and tests of low-discrepancy sequences, ACM Trans. Model. Comput. Simul. 2, 195–213. doi: 10.1145/146382.146385.
 Chen Y., Oliver D.S., Zhang G.M. (2008) Efficient ensemble-based closed-loop production optimization, SPE J. 14, 20–23. doi: 10.2118/112873-PA.
 Coats K.H., Dempsey J.R., Henderson J.H. (1971) The use of vertical equilibrium in two-dimensional simulation of three-dimensional reservoir performance, SPE J. 11, 63–71.
 Coley D. (1999) An introduction to genetic algorithms for scientists and engineers, World Scientific, Singapore.
 Drew S., Homem-de-Mello T. (2006) Quasi-Monte Carlo strategies for stochastic optimization, in: Proceedings of the 2006 Winter Simulation Conference, 3–6 December, Monterey, CA, pp. 774–782.
 Fayazi A., Bagherzadeh H., Shahrabadi A. (2016) Estimation of pseudo relative permeability curves for a heterogeneous reservoir with a new automatic history matching algorithm, J. Pet. Sci. Eng. 140, 154–163. doi: 10.1016/j.petrol.2016.01.013.
 Fouda M.A.G. (2016) Relative permeability upscaling for heterogeneous reservoir models, PhD Dissertation, Heriot-Watt University, Edinburgh, UK.
 Glover F., Kochenberger G. (2003) Handbook of metaheuristics, Kluwer Academic Publishers, Dordrecht, The Netherlands.
 Hajizadeh Y., Demyanov V., Mohamed L., Christie M. (2010) Comparison of evolutionary and swarm intelligence methods for history matching and uncertainty quantification in petroleum reservoir models, in: Intelligent Computational Optimization in Engineering, Springer, Berlin/Heidelberg, Germany, pp. 209–240.
 Haupt R.L., Haupt S.E. (2004) Practical genetic algorithms, 2nd edn., John Wiley & Sons, New York, NY.
 Hearn C.L. (1971) Simulation of stratified waterflooding by pseudo relative permeability curves, J. Pet. Technol. 23, 805–813.
 Hickernell F.J., Yuan Y. (1997) A simple multistart algorithm for global optimization, OR Trans. 1, 2.
 Hou J., Wang D., Luo F., Zhang Y.H. (2012) A review on the numerical inversion methods of relative permeability curves, Procedia Eng. 29, 375–380. doi: 10.1016/j.proeng.2011.12.726.
 Jäckel P. (2002) Monte Carlo methods in finance, John Wiley & Sons, New York, NY.
 Jacks H.H., Smith O.J.E., Mattax C.C. (1973) The modeling of a three-dimensional reservoir with a two-dimensional reservoir simulator – the use of dynamic pseudo functions, SPE J. 13, 175–185.
 Johnson J.B., Nanney M.M., Killough J.E., Lin Y.T. (1982) The Kuparuk River field: a regression approach to pseudo-relative permeabilities, SPE 10531. doi: 10.2118/10531-MS.
 Karaboga D., Akay B. (2009) A comparative study of Artificial Bee Colony algorithm, Appl. Math. Comput. 214, 1, 108–132. doi: 10.1016/j.amc.2009.03.090.
 Kennedy J., Eberhart R. (1995) Particle swarm optimization, Proc. IEEE Int. Conf. Neural Netw. IV, 1942–1948. doi: 10.1109/ICNN.1995.488968.
 Kucherenko S. (2006) Application of Quasi-Monte Carlo methods in global optimization, Glob. Optim. 5, 111–133. doi: 10.1007/0-387-30528-9_5.
 Kulkarni K.N., Datta-Gupta A. (2000) Estimating relative permeability from production data: a streamline approach, SPE J. 5, 4, 402–411. doi: 10.2118/66907-PA.
 Kyte J.R., Berry D.W. (1975) New pseudo functions to control numerical dispersion, SPE J. 15, 269–276.
 Landa J., Kalia R.K., Nakano A., Nomura K., Vashishta P. (2005) History match and associated forecast uncertainty analysis: practical approaches using cluster computing, International Petroleum Technology Conference, 21–23 November, Doha, Qatar. IPTC-10751-MS. doi: 10.2523/IPTC-10751-MS.
 Lee T., Seinfeld J.H. (1987) Estimation of absolute and relative permeabilities in petroleum reservoirs, Inverse Probl. 3, 4, 711–728. doi: 10.1088/0266-5611/3/4/015.
 Lei G. (2002) Adaptive random search in Quasi-Monte Carlo methods for global optimization, Comput. Math. Appl. 43, 747–754. doi: 10.1016/S0898-1221(01)00318-2.
 Niederreiter H. (1984) On the measure of denseness for sequences, in: Topics in classical number theory, Colloquia Math. Soc. János Bolyai 34, North-Holland, Amsterdam, pp. 1163–1208.
 Niederreiter H. (1987) Point sets and sequences with small discrepancy, Monatsh. Math. 104, 273–337. doi: 10.1007/bf01294651.
 Niederreiter H. (1994) Random number generation and Quasi-Monte Carlo methods, Society for Industrial and Applied Mathematics, Philadelphia, PA.
 Simon D. (2013) Evolutionary optimization algorithms, John Wiley & Sons, Hoboken, NJ.
 Sivanandam S., Deepa S. (2007) Introduction to genetic algorithms, Springer-Verlag, Berlin/Heidelberg, Germany.
 Sobol I.M. (1976) Uniformly distributed sequences with an additional uniform property, U.S.S.R. Comput. Math. Math. Phys. 16, 236–242. doi: 10.1016/0041-5553(76)90154-3.
 Tan T.B. (1995) Estimating two- and three-dimensional pseudo-relative permeabilities with non-linear regression, Reservoir Simulation Symposium, San Antonio, TX. doi: 10.2118/29129-MS.
 Wang K., Killough J.E., Sepehrnoori K. (2009) A new upscaling method of relative permeability curves for reservoir simulation, SPE Annual Technical Conference and Exhibition, 4–7 October, New Orleans, LA. doi: 10.2118/124819-MS.
 Yu X., Gen M. (2010) Introduction to evolutionary algorithms, Springer-Verlag, Berlin/Heidelberg, Germany.
 Zarifi A.H., Sahraei E., Parvizi H. (2012) Pseudo relative permeability compensation for numerical dispersion, Pet. Sci. Technol. 30, 15, 1529–1538. doi: 10.1080/10916466.2010.503217.
 Zhang F., Reynolds A.C. (2002) Optimization algorithms for automatic history matching of production data, Proceedings of the 8th European Conference on the Mathematics of Oil Recovery, Freiberg, Germany. doi: 10.3997/2214-4609.201405958.
Appendix A
The construction of Sobol’s lowdiscrepancy sequences
The initial stage in generating a Sobol LDS set operates on a set of integers in the interval [1, 2^{b}−1], where b is the number of bits in an unsigned integer on the host computer (typically b = 32). Let x_{nk} be the nth draw of one of Sobol's integers in dimension k.
Number generation in Sobol's method is based on a set of direction integers, with a distinct direction integer assigned to each of the b bits in the binary integer representation. Let v_{kl} denote the lth direction integer for dimension k. To construct Sobol's numbers, the direction integers must be evaluated first. This process involves the binary coefficients of a primitive polynomial modulo two selected for each dimension (Jäckel, 2002). Take p_{k} as the primitive polynomial modulo two for dimension k, with degree g_{k} (defined by equation (A1)), and let a_{k0}…a_{kg_{k}} be the coefficients of p_{k}, with a_{k0} the coefficient of the highest monomial term.
$${p}_{k}\left(z\right)=\sum _{j=0}^{{g}_{k}}{a}_{\mathrm{kj}}{z}^{{g}_{k}-j}$$(A1)
In each dimension, the first g_{k} direction integers v_{kl} (l = 1…g_{k}) may be chosen freely for the associated p_{k} of the dimension, provided two conditions are met: first, the lth leftmost bit of the direction integer v_{kl} must be set; second, only the l leftmost bits may be nonzero, where the leftmost bit refers to the most significant one in a bit-field representation. All subsequent direction integers are calculated from the recurrence relation (A2) (Jäckel, 2002):
$${v}_{\mathrm{kl}}=\frac{{v}_{k(l-{g}_{k})}}{{2}^{{g}_{k}}}\ {\oplus}_{2}\ \sum _{j=1}^{{g}_{k}}{}_{{\oplus}_{2}}\,{a}_{\mathrm{kj}}{v}_{k(l-j)}\qquad \mathrm{for}\ l>{g}_{k}.$$(A2)
Hereby, ${\oplus}_{2}$ denotes the binary addition of integers modulo two (referred to in the computer-science literature as the XOR operation), and ${\sum}_{{\oplus}_{2}}$ stands for a chain of XOR operations, applied analogously to the conventional ∑ summation operator. The procedure is to right-shift the direction integer ${v}_{k(l-{g}_{k})}$ by ${g}_{k}$ bits and then XOR the result with those unshifted direction integers ${v}_{k(l-j)}$, j = 1,…,${g}_{k}$, whose coefficients ${a}_{\mathrm{kj}}$ are nonzero.
The only remaining requirement for the algorithm is the generating integer of the nth draw. The natural choice is the draw number n itself, although any other sequence assigning a unique integer to each new draw works equally well (Jäckel, 2002). Once these preliminaries are set, Sobol's integers for the s dimensions of interest are generated by (Jäckel, 2002):
$${x}_{\mathrm{nk}}=\sum _{j}{}_{{\oplus}_{2}}\,{v}_{\mathrm{kj}},$$(A3)
in which the XOR sum runs over every index j for which the jth bit of the generating integer is set (counting from the right).
Jäckel (2002) has provided tabulated initialization numbers for generating Sobol's integers up to a dimension of 32 (Table A1). The sequence generated with these initialization numbers possesses the property that, for any binary segment of the s-dimensional sequence of length 2^{s}, there is exactly one draw in each of the 2^{s} hypercubes that result from subdividing the unit hypercube along each of its unit-length extensions into half (Jäckel, 2002).
Table A1. An instance of the initialization numbers for generating Sobol's LDS, up to a dimension of 32 (Jäckel, 2002).
Once generated, Sobol's integers are readily converted to other scales; for example, dividing them by 2^{b} maps them onto the [0, 1] scale.
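To make the construction concrete, the following minimal Python sketch implements recurrence (A2) and equation (A3) for a single dimension. The primitive polynomial z^{3} + z + 1 and the initialization integers (1, 3, 7) used here are illustrative choices, not the tabulated values of Table A1.

```python
B = 32  # number of bits in the unsigned integers being emulated

def direction_integers(poly_coeffs, m_init, n_dirs=B):
    """Build the b-bit direction integers v_l via recurrence (A2).

    poly_coeffs -- [a_1, ..., a_g], coefficients after the leading term
                   (a_0 = 1 is implicit for a primitive polynomial)
    m_init      -- the g freely chosen odd integers m_l < 2**l; the two
                   conditions of the text are met by v_l = m_l << (B - l)
    """
    g = len(poly_coeffs)
    v = [m << (B - l) for l, m in enumerate(m_init, start=1)]
    for l in range(g, n_dirs):
        new = v[l - g] >> g                # right-shift v_{l-g} by g bits
        for j in range(1, g + 1):
            if poly_coeffs[j - 1]:         # XOR in v_{l-j} whenever a_j = 1
                new ^= v[l - j]
        v.append(new)
    return v

def sobol_draw(n, v):
    """Equation (A3): XOR the v_j whose bit j is set in the generating
    integer n, then rescale the resulting integer to [0, 1)."""
    x, j = 0, 0
    while n:
        if n & 1:
            x ^= v[j]
        n >>= 1
        j += 1
    return x / 2.0 ** B

# One dimension with primitive polynomial z^3 + z + 1 (a_1=0, a_2=1, a_3=1)
v = direction_integers([0, 1, 1], [1, 3, 7])
points = [sobol_draw(n, v) for n in range(1, 9)]  # first eight draws
```

The draws land on the dyadic grid (0.5, 0.75, 0.25, 0.875, …), filling the unit interval far more evenly than pseudorandom draws of the same length would.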
Appendix B
The algorithm to perform QuasiMonte Carlo minimization
Let ${\omega}_{i}^{\mathrm{iter}}$ represent the best solution for the ith point at the iter-th iteration, and let FBEST be the best (minimum) value of f recorded up to the iter-th iteration. A detailed description of the QMC procedure then follows (Hickernell and Yuan, 1997):
Step0 Initialize
Input the number of initial points, N, the number of points with best (lowest) objective function values to retain in each iteration, N_{best}, and the desired number of iterations to be done for local search on each of the points, Niter_{local.search}.
Set the number of iterations, iter = 0
Set NSP = 0; NSWP = 0; NTIX(j) = 0 for (1 ≤ j ≤ N)
Step1 Concentrate
Obtain a new point set by applying Niter_{local.search} iteration(s) of an inexpensive local search to each of the points ${\omega}_{i}^{\mathrm{iter}}$ (1 ≤ i ≤ N)
Step2 Reduce
Find $\mathrm{\Xi}\left(\mathrm{iter}\right)\subset \{1,\dots ,N\}$ such that $\mathrm{\Xi}\left(\mathrm{iter}\right)$ has N_{best} elements and that $f\left({\omega}_{i}^{\mathrm{iter}}\right)\le f\left({\omega}_{j}^{\mathrm{iter}}\right)$ $\forall i\in \mathrm{\Xi}\left(\mathrm{iter}\right)$ and $\forall j\notin \mathrm{\Xi}\left(\mathrm{iter}\right)$
If $j\in \mathrm{\Xi}\left(\mathrm{iter}\right)$, set NTIX (j) = NTIX (j) + 1
If $j\notin \mathrm{\Xi}\left(\mathrm{iter}\right)$, set NTIX (j) = 0
Step3 Find local minimum
For j = 1,…, N such that NTIX (j) ≥ 2
Set NTIX (j) = 0
If NSP = 0 or $f\left({\omega}_{j}^{\mathrm{iter}}\right)\le \mathrm{FBEST}+{10}^{-4}$ then
Starting from ${\omega}_{j}^{\mathrm{iter}}$, perform a local optimization search to obtain the local minimizer of the point, ${\omega}_{j,\mathrm{local}.\mathrm{min}}^{\mathrm{iter}}$.
If $f\left({\omega}_{j,\mathrm{local}.\mathrm{min}}^{\mathrm{iter}}\right)<\mathrm{FBEST}$ then
Set NSP = NSP + 1; NSWP = 0; FBEST = $f\left({\omega}_{j,\mathrm{local}.\mathrm{min}}^{\mathrm{iter}}\right)$
Else
Set NSWP = NSWP + 1
End
Else
If $\frac{\mathrm{NSWP}}{\mathrm{NSP}}\ge 3$ then stop (success)
Step4 Sample additional points
For j = 1, 2, …, N
If NTIX (j) = 0 then
Generate ${\omega}_{j}^{\mathrm{iter}+1}$ by the Sobol’s LDS technique
Else
Set ${\omega}_{j}^{\mathrm{iter}+1}={\omega}_{j}^{\mathrm{iter}}$
End
Set iter = iter + 1
If the total number of function calls has reached its limit, then stop (failure).
Go to Step1.
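The procedure above can be sketched compactly in Python. This is a simplified rendition under several stated assumptions: the objective is a toy two-dimensional quadratic, the "inexpensive local search" of Step 1 is a crude coordinate descent, new points in Step 4 come from a Halton-style radical-inverse sequence standing in for the Sobol generator of Appendix A, and the NTIX/NSWP bookkeeping and stopping tests of Steps 2 and 3 are reduced to simple best-point tracking.

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse -- a simple low-discrepancy
    stand-in for the Sobol generator of Appendix A."""
    inv, frac = 0.0, 1.0 / base
    while n:
        inv += (n % base) * frac
        n //= base
        frac /= base
    return inv

def lds_point(draw):
    """One 2-D low-discrepancy point (Halton-style, bases 2 and 3)."""
    return [radical_inverse(draw, 2), radical_inverse(draw, 3)]

def f(w):
    """Toy objective with its global minimum (0) at (0.3, 0.7)."""
    return (w[0] - 0.3) ** 2 + (w[1] - 0.7) ** 2

def local_search(w, step=0.02, iters=5):
    """Step 1 'Concentrate': an inexpensive coordinate-wise descent."""
    w = list(w)
    for _ in range(iters):
        for d in range(len(w)):
            for cand in (w[d] - step, w[d] + step):
                trial = w[:d] + [cand] + w[d + 1:]
                if f(trial) < f(w):
                    w = trial
    return w

def qmc_minimize(N=16, N_best=4, max_iter=30):
    draw = 1
    pts = [lds_point(draw + i) for i in range(N)]     # Step 0: initialize
    draw += N
    fbest, wbest = min((f(w), w) for w in pts)
    for _ in range(max_iter):
        pts = [local_search(w) for w in pts]          # Step 1: concentrate
        ranked = sorted(range(N), key=lambda i: f(pts[i]))
        keep = set(ranked[:N_best])                   # Step 2: reduce
        if f(pts[ranked[0]]) < fbest:                 # Step 3: record the best
            fbest, wbest = f(pts[ranked[0]]), pts[ranked[0]]
        for i in range(N):                            # Step 4: resample the rest
            if i not in keep:
                pts[i] = lds_point(draw)
                draw += 1
    return wbest, fbest

wbest, fbest = qmc_minimize()
```

Because the sample points are deterministic, repeated runs of this sketch reproduce the same minimizer, which is one practical appeal of the QMC multistart over its stochastic counterparts.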
Appendix C
The algorithm to perform the Artificial Bee Colony (ABC) minimization
Let ${\omega}_{i}^{\mathrm{iter}}$ represent the best solution for the ith point at the iter-th iteration, and let FBEST be the best (minimum) value of the function f recorded up to the iter-th iteration. The ABC optimization algorithm then works in the following steps:
Step0 Initialize
Input the number of initial points, N, the required relative solution tolerance used as a stopping criterion, ε, and the maximum number of cycles to be performed, Niter_{max}.
Step1 Find initial minimum
Set iter = 1
Generate N solution points, ${\omega}_{i\hspace{1em}1\le i\le N}^{\mathrm{iter}}$, by (uniform) random selection within the sdimensional domain.
Set FBEST = min (f (${\omega}_{i}^{\mathrm{iter}}$)) for 1 ≤ i ≤ N.
Step2 Repeat
Employed bees
Perturb each solution point based on its latest solution history (Eq. (10)), obtaining the solution set ${\omega}_{\mathrm{perturb},i;1\le i\le N}^{\mathrm{iter}}$.
For i = 1, …, N
If $f\left({\omega}_{\mathrm{perturb},i}^{\mathrm{iter}}\right)<f\left({\omega}_{i}^{\mathrm{iter}}\right)$ then
${\omega}_{i}^{\mathrm{iter}}={\omega}_{\mathrm{perturb},i}^{\mathrm{iter}}$
End
Abandon the ith employed bee, if its solution cannot be improved, and convert it to a scouttype bee.
Onlooker bees
Compute the probability of each solution point, based on its corresponding solution history (eq.), obtain ${\mathrm{prob}}_{{i}_{1\le i\le N}}$.
For i = 1,…, N
If rand[0, 1]≤ prob_{i} then
Assign the ith onlooker bee to the ith solution set.
Else
Assign the ith onlooker bee to a different solution set, if the probabilistic measure is met.
End
Perturb each solution point of an onlooker bee based on its latest history (Eq. (10)), obtaining the solution set ${\omega \text{'}}_{\mathrm{perturb},{i}_{1\le i\le N}}^{\mathrm{iter}}$.
For i = 1, …,N
If $f\left({\omega \text{'}}_{\mathrm{perturb},i}^{\mathrm{iter}}\right)<f\left({\omega \mathrm{\prime}}_{i}^{\mathrm{iter}}\right)$ then
${\omega \text{'}}_{i}^{\mathrm{iter}}={\omega \text{'}}_{\mathrm{perturb},i}^{\mathrm{iter}}$
End
Scout bees
Randomly select N_{scout.bees} solution points within the s-dimensional domain, obtaining ${\omega}_{i\ 1\le i\le N}^{\mathrm{\u2033}\mathrm{iter}}$.
If $\mathrm{min}{\left(f\left({\omega \mathrm{\u2033}}_{i}^{\mathrm{iter}}\right)\right)}_{1\le i\le N}<\mathrm{FBEST}$ then
FBEST = $\mathrm{min}{\left(f\left({\omega \mathrm{\u2033}}_{i}^{\mathrm{iter}}\right)\right)}_{1\le i\le N}$
End
Set FBEST = min (f(ω), f(ω′), f(ω″))
If (iter > Niter_{max}) then stop.
If $\left(\left|{\mathrm{FBEST}}^{\mathrm{iter}}-{\mathrm{FBEST}}^{\mathrm{iter}-1}\right|<\epsilon \right)$ then stop.
Set iter = iter + 1
Go to Step2.
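A minimal Python sketch of the employed/onlooker/scout cycle above is given below, applied to a toy sphere objective. The perturbation and probability rules are generic ABC forms in the spirit of Karaboga and Akay (2009), standing in for the specific equation (10) and the probability expression referenced in the text; the tolerance-based stopping test is omitted for brevity.

```python
import random

def f(w):
    """Toy objective: sphere with its minimum of 0 at (0.5, ..., 0.5)."""
    return sum((x - 0.5) ** 2 for x in w)

def perturb(i, pts):
    """Candidate food source: nudge one coordinate of pts[i] relative to a
    randomly chosen partner source (a generic ABC move)."""
    k = random.choice([j for j in range(len(pts)) if j != i])
    d = random.randrange(len(pts[i]))
    cand = list(pts[i])
    cand[d] += random.uniform(-1.0, 1.0) * (pts[i][d] - pts[k][d])
    return cand

def abc_minimize(N=20, dim=2, limit=10, max_iter=200):
    pts = [[random.random() for _ in range(dim)] for _ in range(N)]
    trials = [0] * N                     # stagnation counter per food source
    best = list(min(pts, key=f))
    fbest = f(best)
    for _ in range(max_iter):
        # Employed bees: greedy one-coordinate moves
        for i in range(N):
            cand = perturb(i, pts)
            if f(cand) < f(pts[i]):
                pts[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Onlooker bees: revisit sources with fitness-proportional probability
        fits = [1.0 / (1.0 + f(w)) for w in pts]
        total = sum(fits)
        for _ in range(N):
            r, acc, i = random.uniform(0, total), 0.0, 0
            for j, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    i = j
                    break
            cand = perturb(i, pts)
            if f(cand) < f(pts[i]):
                pts[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Scout bees: abandon exhausted sources and resample them at random
        for i in range(N):
            if trials[i] > limit:
                pts[i] = [random.random() for _ in range(dim)]
                trials[i] = 0
        cur = min(pts, key=f)
        if f(cur) < fbest:
            best, fbest = list(cur), f(cur)
    return best, fbest
```

The greedy replacement in both bee phases makes the recorded FBEST non-increasing, which matches the steadily decreasing objective-function trace observed for the ABC in Figure 9.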
Appendix D
The algorithm to perform the Particle Swarm Optimization (PSO) minimization
Let ${\omega}_{i}^{\mathrm{iter}}$ represent the solution for the ith point at the iter-th iteration. Take ${\omega}_{{\mathrm{best}}_{i}}$ as the best solution for the ith point recorded up to the iter-th iteration, and ${\omega}_{{\mathrm{velocity}}_{i,j}}$ as the velocity of the ith point in the jth dimension, whose extent is Δ_{j}. Also, let FBEST be the best (minimum) value of the function f recorded up to the iter-th iteration, and U the uniform random distribution. The PSO optimization algorithm proceeds as follows.
Step0 Initialize
Input the number of initial points, N, the required relative solution tolerance used as a stopping criterion, ε, the maximum number of cycles to be performed, Niter_{max}, and the three coefficients in the velocity equation (Eq. (1)), coef_{1}, coef_{2}, coef_{3}.
Step1 Find initial minimum
Set iter = 1
Generate N solution points, ${\omega}_{i\hspace{1em}1\le i\le N}^{\mathrm{iter}}$, by (uniform) random selection within the sdimensional domain.
Set FBEST = min $\left(f\left({\omega}_{i}^{\mathrm{iter}}\right)\right)$ for i = 1, …, N.
For i = 1, …, N
${\omega}_{{\mathrm{best}}_{i}}={\omega}_{i}^{\mathrm{iter}}$
For j = 1, …, s
${\omega}_{{\mathrm{velocity}}_{i,j}}=U(-{\Delta}_{j},\ +{\Delta}_{j})$
Step2 Repeat
For i = 1, …, N
For j = 1,…, s
rand_{1} = U(0,1); rand_{2} = U(0,1)
$${\omega}_{{\mathrm{velocity}}_{i,j}}^{\mathrm{iter}}={\mathrm{coef}}_{1}\times {\omega}_{{\mathrm{velocity}}_{i,j}}^{\mathrm{iter}}+{\mathrm{coef}}_{2}\times {\mathrm{rand}}_{1}\times ({\omega}_{{\mathrm{best}}_{i,j}}-{\omega}_{i,j}^{\mathrm{iter}})+{\mathrm{coef}}_{3}\times {\mathrm{rand}}_{2}\times ({\omega}_{{\mathrm{FBEST}}_{j}}-{\omega}_{i,j}^{\mathrm{iter}})$$
${\omega}_{i}^{\mathrm{iter}}={\omega}_{i}^{\mathrm{iter}}+{\omega}_{{\mathrm{velocity}}_{i}}^{\mathrm{iter}}$
If $f\left({\omega}_{i}^{\mathrm{iter}}\right)<f({\omega}_{{\mathrm{best}}_{i}})$ then
${\omega}_{{\mathrm{best}}_{i}}={\omega}_{i}^{\mathrm{iter}}$
If $f\left({\omega}_{{\mathrm{best}}_{i}}\right)$ < FBEST then
FBEST = $f\left({\omega}_{{\mathrm{best}}_{i}}\right)$
${\omega}_{\mathrm{FBEST}}={\omega}_{i}^{\mathrm{iter}}$
If (iter > Niter_{max}) then stop.
If $\left(\left|{\mathrm{FBEST}}^{\mathrm{iter}}-{\mathrm{FBEST}}^{\mathrm{iter}-1}\right|<\epsilon \right)$ then stop.
Set iter = iter + 1
ω^{iter} = ω^{iter−1}
${\omega}_{\mathrm{velocity}}^{\mathrm{iter}}={\omega}_{\mathrm{velocity}}^{\mathrm{iter}1}$
Go to Step2.
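The loop above translates almost line-for-line into Python. The sketch below uses a toy sphere objective and representative coefficient values (coef_{1} = 0.7, coef_{2} = coef_{3} = 1.5, a common inertia/acceleration choice, not values taken from this study); the tolerance-based stopping test is omitted for brevity.

```python
import random

def f(w):
    """Toy objective: sphere with its minimum of 0 at (0.5, ..., 0.5)."""
    return sum((x - 0.5) ** 2 for x in w)

def pso_minimize(N=30, dim=2, max_iter=100, coef1=0.7, coef2=1.5, coef3=1.5):
    # Steps 0-1: initialize positions, velocities, and the personal/global bests
    pts = [[random.random() for _ in range(dim)] for _ in range(N)]
    vel = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(N)]
    pbest = [list(w) for w in pts]
    gbest = list(min(pts, key=f))
    fbest = f(gbest)
    # Step 2: repeat the velocity and position updates
    for _ in range(max_iter):
        for i in range(N):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][j] = (coef1 * vel[i][j]
                             + coef2 * r1 * (pbest[i][j] - pts[i][j])
                             + coef3 * r2 * (gbest[j] - pts[i][j]))
                pts[i][j] += vel[i][j]
            if f(pts[i]) < f(pbest[i]):          # update the personal best ...
                pbest[i] = list(pts[i])
                if f(pbest[i]) < fbest:          # ... and the global best
                    fbest, gbest = f(pbest[i]), list(pbest[i])
    return gbest, fbest
```

The three velocity terms balance inertia, attraction to each particle's own best, and attraction to the swarm best, which produces the gradual, swarm-wide descent of the objective function seen for the PSO in Figure 9.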