Optimizing Velocity Model Building for Modern 2D Programs in Frontier Areas – Doing More With Less

Spectrum’s Milos Cvetkovic, Chris Benson, Cesar Arias, Laurie Geiger, Paolo Esestime, Leandro Gabioli and Laura Younker evaluate how changes in the seismic industry have resulted in the evolution of the research and development that support modern 2D exploration programs. This information was presented at SEG 2018 in Anaheim, California.

Summary
As the seismic industry redefines itself, modern 2D exploration programs and the research and development that support them are evolving as well. With the necessity for more economically efficient business models, the integration of geological and geophysical workflows has proven to be a successful strategy for large-scale multi-client programs in frontier basins.

Using a holistic approach in the time and depth velocity model building stages of the workflow, we have developed a robust methodology that delivers reliable models while shortening the project timeline. Here, we highlight improvements in regional 2D marine seismic programs, describing how we incorporate interpretation and non-seismic data early in the workflow to constrain the seismic velocity models and aid subsequent automatic velocity analysis. The examples shown come from large-scale 2D projects in the deep waters of the Santos and Campos basins offshore Brazil, and offshore Somalia.

Deep water offshore Brazil, Santos and Campos basins
The shallow waters of both the Santos and Campos basins have been heavily explored, while the deep water can still be treated as frontier. Only coarse, regional lines of 2D seismic cover this area (Figure 2). The seismic data available to us consist of ~17000 km acquired in late 2016 with 12 km of offsets, a source towed at 8 m and a cable towed at 15 m. Shot spacing is 25 m; recording is continuous, with shots fired at 10 s intervals and the individual shots separated during the data processing phase.
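A quick back-of-the-envelope check shows why this shooting pattern forces the shot separation step described below: the listening time needed for the far offsets exceeds the 10 s shot interval, so successive records overlap. The shot spacing, shot interval and maximum offset below come from the text; the velocity and reflector depth are illustrative assumptions, and with realistic sediment velocities the exact traveltime changes while the overlap conclusion still holds.

```python
import math

# Parameters from the text
shot_spacing_m = 25.0
shot_interval_s = 10.0
max_offset_m = 12_000.0

# Illustrative assumptions, not program values
v_avg_ms = 1500.0            # crude constant velocity
reflector_depth_m = 8_000.0  # deep target depth

vessel_speed = shot_spacing_m / shot_interval_s  # 2.5 m/s (~4.9 knots)

# Two-way traveltime at the far offset, straight-ray constant-velocity case
twt_far_s = 2.0 * math.hypot(max_offset_m / 2.0, reflector_depth_m) / v_avg_ms

print(f"vessel speed   : {vessel_speed:.1f} m/s")
print(f"far-offset TWT : {twt_far_s:.1f} s  (> {shot_interval_s:.0f} s shot interval)")
# TWT exceeding the shot interval means each record contains energy
# from the neighbouring shots, hence the deblending step in processing.
```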


Figure 2: Map of legacy 2D surveys (black) and the current 2D program (red). The speculative legacy data lie on a sparse, regional grid, while the new grid is regular with a 10 km line interval.

The pre-processing sequence can be described as conventional and includes designature, deghosting and demultiple steps. The exception is the separation of individual shots, or deblending, of the continuously recorded data, following the steps presented by Seher and Clarke (2016). To reduce project turnaround time we make practical use of the data as they become available: we start the initial model building immediately after designature and zero-phasing, concentrating on the shallow section above the multiples and the continuous-recording noise. Once the data are further processed we continue working on the mid and deep parts of the model.
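As a minimal sketch of the common first step of such schemes, and not the Seher and Clarke (2016) method itself, pseudo-deblending cuts the continuous record into fixed-length windows aligned to the shot firing times; the blended energy from neighbouring shots then appears incoherent in other sorting domains, where it can be attenuated. The array sizes, function name and synthetic input below are illustrative assumptions.

```python
import numpy as np

def pseudo_deblend(continuous, firing_times, record_len, dt):
    """Cut a continuously recorded channel into per-shot records.

    continuous   : 1D array, one receiver channel recorded without gaps
    firing_times : firing time of each shot in seconds (from the source log)
    record_len   : desired output record length in seconds
    dt           : sample interval in seconds
    """
    n_samp = int(round(record_len / dt))
    shots = np.zeros((len(firing_times), n_samp), dtype=continuous.dtype)
    for i, t0 in enumerate(firing_times):
        start = int(round(t0 / dt))
        stop = min(start + n_samp, len(continuous))
        shots[i, : stop - start] = continuous[start:stop]
    return shots

# Example: 10 s shot interval with 14 s records -> consecutive records
# overlap by 4 s, so each trace still contains neighbouring-shot energy.
dt = 0.004
continuous = np.random.randn(int(600 / dt))  # 10 minutes of synthetic data
firing = np.arange(0.0, 560.0, 10.0)         # shots every 10 s
records = pseudo_deblend(continuous, firing, record_len=14.0, dt=dt)
print(records.shape)                         # (56, 3500)
```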

For the PSTM velocity analysis we extended the approach presented by Esestime et al. (2016), incorporating all available seismic and non-seismic information to derive an initial model that we then refine with automatic picking. Compared to conventional velocity picking, this workflow saves a significant amount of time as we cover ~17000 km of line length over an area of ~60000 km². We find that velocities picked manually during acquisition, and even in the processing center, are often inconsistent in quality in complicated geologic settings such as salt basins, and will rarely tie at line intersections (Figure 3a). However, we can still use these RMS models to generate an initial pre-stack migrated section that feeds this streamlined workflow.

We interpret key regional horizons, in this case the water bottom and top of salt, and use base-of-salt horizons from legacy PSDM projects to create an additional model building surface. Although the legacy PSDM data comprise different vintages on an irregular and very sparse grid, the regional interpretation surfaces correlate relatively well on the new grid. We use a geostatistical distribution scheme to create a smooth 3D time interval velocity model that honors the picked velocities and distributes them along these regional horizons. The resulting velocity model is consistent and ties across the survey (Figures 3a and 3b). We then update these velocities with a pass of vertical semblance based auto-picking and use the result to generate a new PSTM image. Next, we refine the interpretation of the top and base of salt, adjust the 3D geostatistical model and run additional passes of auto-picking on migrated CDP gathers. Auto-picking in the salt and pre-salt sections does not produce stable results even with targeted, heavy preconditioning, so these velocities are finalized with a loop back to the PSDM models.

Our new workflow significantly reduces the timeline for a PSTM project because the regional interpretation does not require precise picking. Vertical semblance picking, horizon based 3D ties and auto-picking are largely automated, resource light processes in which parameter setting and testing take most of the processors' time.
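The vertical semblance based auto-picking can be sketched as the classic NMO semblance scan: for each trial RMS velocity the gather is evaluated along the corresponding hyperbolic moveout curve, coherence is measured in a short vertical window, and the picker follows the semblance maximum. The sketch below is deliberately naive, with none of the preconditioning or lateral constraints a production picker carries, and all names and array shapes are assumptions.

```python
import numpy as np

def nmo_semblance(gather, offsets, t0, dt, velocities, win=0.032):
    """Semblance at zero-offset time t0 for a set of trial RMS velocities.

    gather     : (n_traces, n_samples) NMO-uncorrected CDP gather
    offsets    : (n_traces,) source-receiver offsets in metres
    t0         : zero-offset two-way time in seconds
    dt         : sample interval in seconds
    velocities : trial RMS velocities in m/s
    win        : vertical smoothing window in seconds
    """
    half = max(1, int(win / (2 * dt)))
    n_tr, n_samp = gather.shape
    semb = np.zeros(len(velocities))
    for iv, v in enumerate(velocities):
        # hyperbolic moveout: t(x) = sqrt(t0^2 + (x / v)^2)
        tx = np.sqrt(t0**2 + (offsets / v) ** 2)
        num = den = 0.0
        for k in range(-half, half + 1):
            idx = np.round((tx + k * dt) / dt).astype(int)
            ok = (idx >= 0) & (idx < n_samp)
            a = gather[np.arange(n_tr)[ok], idx[ok]]
            num += a.sum() ** 2                 # stacked energy
            den += (a**2).sum() * ok.sum()      # total energy x live traces
        semb[iv] = num / (den + 1e-12)
    return semb

def autopick(gather, offsets, dt, t0s, velocities):
    """Pick the maximum-semblance velocity at each analysis time."""
    return np.array(
        [velocities[np.argmax(nmo_semblance(gather, offsets, t0, dt, velocities))]
         for t0 in t0s]
    )
```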

For the initial PSDM model we convert the previously described PSTM model to the depth interval domain and apply a dip-constrained smoother, since the post-salt velocities follow a compaction trend and the salt geometry. We then proceed with conventional 2D VTI reflection tomography in a Kirchhoff based workflow to update the post-salt section. For salt interpretation we use low frequency RTM imaging. In salt basins 2D data have inherent limitations, as a significant amount of energy arrives from out of the plane of the section. With its ability to image overturned beds, the RTM gives a better and cleaner image than the Kirchhoff, but there will still be salt bodies where imaging uncertainty remains high. However, with 2D data it is relatively fast to run salt scenario testing and derive a salt model that optimally images both the base of salt and the pre-salt in both acquisition directions (nominal strike and dip). With low frequency optimized RTM and multi-directional surface definition we can run near-interactive interpretation scenario testing.
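The first step, taking the smooth interval-velocity-in-time model to the depth interval domain, amounts to a vertical integration: the depth of each time sample is the running integral of half the interval velocity over two-way time, after which the velocities are resampled onto a regular depth axis. Below is a minimal 1D sketch (the 3D model would be converted trace by trace); the function name and all numbers are illustrative assumptions.

```python
import numpy as np

def vint_time_to_depth(v_int_t, dt, dz, z_max):
    """Resample an interval velocity from two-way time to depth.

    v_int_t : (n_t,) interval velocity in m/s, sampled in two-way time
    dt      : two-way-time sample interval in seconds
    dz      : output depth sample interval in metres
    z_max   : maximum output depth in metres
    """
    # depth of each time sample: z(t) = cumulative sum of v_int * dt / 2
    z_of_t = np.concatenate([[0.0], np.cumsum(v_int_t[:-1]) * dt / 2.0])
    z_out = np.arange(0.0, z_max, dz)
    # interpolate velocity onto the regular depth axis (clamped at the ends)
    return np.interp(z_out, z_of_t, v_int_t)

# Example: a water layer over a simple compacting sediment trend
dt = 0.004
t = np.arange(0.0, 8.0, dt)                     # 8 s of two-way time
v = np.where(t < 4.0, 1500.0, 2000.0 + 300.0 * (t - 4.0))
v_depth = vint_time_to_depth(v, dt, dz=10.0, z_max=9000.0)
```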

Once the top and base of salt are finalized we process the pre-salt section. Frontier basins typically offer very few wells, and our program was no exception: only one well was available, and the pre-salt velocity trend derived from it produced high quality imaging only in close proximity to the well. Elsewhere we used a 3D geostatistical model derived from legacy data and a potential field interpretation of the upper crust and Moho surface. Reflection based tomography was run in the parts of the survey with good pre-salt signal-to-noise ratio, and the velocities were extrapolated across the survey using a combination of regional basement and upper crust surfaces. Figure 4 shows the final depth model.
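One simple way to express such an extrapolation is to hang a linear, compaction-style trend from a regional surface: below the chosen horizon, velocity increases linearly with depth from a value calibrated at the well. The function and every parameter below are illustrative assumptions rather than the calibrated project values.

```python
import numpy as np

def horizon_hung_velocity(z, z_horizon, v0, k, v_above):
    """Linear velocity trend hung from a reference horizon.

    z         : (n_z,) depth samples in metres
    z_horizon : depth of the reference horizon at this location
    v0        : velocity at the horizon in m/s (e.g. calibrated at the well)
    k         : vertical gradient in (m/s) per metre
    v_above   : placeholder value above the horizon, where the
                overburden model applies instead
    """
    v = np.full_like(z, v_above, dtype=float)
    below = z >= z_horizon
    v[below] = v0 + k * (z[below] - z_horizon)
    return v

# Example: v(z) = 4500 + 0.4 * (z - z_horizon) below a 6 km surface
z = np.arange(0.0, 10_000.0, 10.0)
v = horizon_hung_velocity(z, z_horizon=6_000.0, v0=4_500.0, k=0.4,
                          v_above=3_000.0)
```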
