Focal deblending
Using the focal transform for simultaneous source separation
Abstract
Nearly-simultaneous-source (blended) acquisition differs from conventional acquisition in that seismic wavefields originating from different sources are allowed to overlap in the recorded seismic traces. This allows more flexibility in deciding the number of shots, the shot density and the effective acquisition time of a survey, but it adds the complication of having to handle blended wavefields.
This thesis explores an inversion-based deblending method for wavefield separation in the marine setting. As deblending is usually an underdetermined problem, extra information in the form of additional constraints and regularization is needed to obtain a unique solution with minimal blending-noise leakage. To this end, the proposed method uses the focal transform in combination with sparsity-promoting regularization to discriminate against solutions to the blending equation that are valid but contain excessive blending noise. The focusing operation provided by the focal transform tends to focus the coherent signal to be extracted, but does not focus the incoherent blending noise equally well. Sparse solutions will therefore tend to retain the high-amplitude focused events while discarding the lower-amplitude blending noise. A key feature that makes sparse solutions possible is the transform's ability to describe curved events in a subsurface-consistent manner, using few focal-domain coefficients.
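The sparsity-promoting inversion described above can be sketched with a proximal-gradient (FISTA-style) solver. The abstract does not prescribe a particular algorithm, so the solver choice, the random blending matrix `Gamma`, and the identity synthesis operator standing in for the focal transform are all illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: shrink coefficients toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def deblend_fista(Gamma, S, d, lam, n_iter=300):
    """Sparsity-promoting inversion of the blending equation d = Gamma S x.

    Gamma : blending operator, S : sparsifying synthesis operator
    (a toy stand-in for the focal transform), d : blended data,
    lam : weight of the l1 penalty. Returns estimated coefficients x.
    """
    A = Gamma @ S
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - step * (A.T @ (A @ z - d)), step * lam)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Toy example: random blending operator, identity "transform", sparse model.
rng = np.random.default_rng(0)
Gamma = rng.standard_normal((60, 120))   # blends 120 samples into 60
S = np.eye(120)
x_true = np.zeros(120)
x_true[[5, 40, 90]] = [2.0, -1.5, 1.0]
d = Gamma @ S @ x_true                   # blended data
x_est = deblend_fista(Gamma, S, d, lam=0.5)
```

The l1 penalty plays the role described in the text: among the many coefficient vectors consistent with the blended data, it selects one dominated by a few high-amplitude (focused) events rather than spread-out low-amplitude noise.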
The focal transform can be defined in multiple ways, using one-way or two-way wavefield propagation operators. In the implementations described in this thesis, I use a crude velocity model, based on picked normal-moveout (NMO) stacking velocities, to construct focal operators that can focus surface data onto a set of depth levels where significant reflectors are found. This choice of velocity model is suboptimal for focusing purposes, but is a pragmatic compromise, given that a more detailed velocity model may not be available at the deblending phase of the processing workflow.
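As a minimal illustration of the focusing idea behind operators built from picked stacking velocities, the sketch below applies a plain NMO correction that flattens a hyperbolic event to its zero-offset time. The function name, the single-velocity model, and the spike synthetic are assumptions made for this toy, not the thesis's actual focal operators:

```python
import numpy as np

def nmo_correct(gather, offsets, dt, v_stack):
    """Flatten hyperbolic reflections using a picked stacking velocity.

    A crude stand-in for the focusing step: after normal-moveout correction
    a reflection collapses to a flat event at its zero-offset time.
    gather : (n_t, n_x) array, offsets : (n_x,) in metres,
    dt : time sampling in seconds, v_stack : scalar stacking velocity in m/s.
    """
    n_t, n_x = gather.shape
    t0 = np.arange(n_t) * dt                        # zero-offset time axis
    out = np.zeros_like(gather)
    for ix, x in enumerate(offsets):
        t_src = np.sqrt(t0**2 + (x / v_stack)**2)   # hyperbolic traveltime t(x)
        # read each output sample from the input trace at t(x); outside -> 0
        out[:, ix] = np.interp(t_src, t0, gather[:, ix], left=0.0, right=0.0)
    return out

# Synthetic gather: one spike per trace along a hyperbola (t0 = 0.4 s, v = 2000 m/s).
dt, n_t = 0.004, 251
offsets = np.arange(8) * 100.0
gather = np.zeros((n_t, offsets.size))
for ix, x in enumerate(offsets):
    k = int(round(np.sqrt(0.4**2 + (x / 2000.0)**2) / dt))
    gather[k, ix] = 1.0
flat = nmo_correct(gather, offsets, dt, 2000.0)
# After correction the peaks line up near sample 100 (t0 = 0.4 s) on all traces.
```

A wrong velocity leaves residual moveout and degrades the focusing, which is the trade-off the text describes when only crude NMO velocities are available at the deblending stage.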
In principle the focusing and defocusing operations involve the entire dataset, which makes the focal transform computationally expensive to evaluate. One remedy investigated here is to use acquisition-specific subsets of the input data to split the problem into smaller chunks, combined with a suitable flavor of the focal transform and focal grid. Another extension of the method that I discuss is the use of a focal-curvelet hybrid transform for deblending. Its main advantage is that events with linear moveout tend to be more sparsely represented in a curvelet basis. However, this comes at the cost of extra computational effort and some difficulty in balancing the contributions of the two transforms to the final solution.
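One simple way to picture the balancing problem for a hybrid transform is weighted-l1 inversion over a concatenated dictionary. The random matrices below stand in for the focal and curvelet operators, and the weight values are purely illustrative:

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft thresholding with a per-coefficient threshold tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Toy stand-ins for the two synthesis operators (not actual focal or
# curvelet transforms); the hybrid dictionary is their concatenation.
rng = np.random.default_rng(1)
S1 = rng.standard_normal((64, 80))
S2 = rng.standard_normal((64, 80))
A = np.hstack([S1, S2])

# Per-coefficient weights balance the two transforms: raising the weight on
# the second block penalizes its coefficients and shifts energy to the first.
w = np.concatenate([np.full(80, 1.0), np.full(80, 2.0)])

x_true = np.zeros(160)
x_true[[3, 30]] = [1.5, -1.0]            # signal sparse in the first dictionary
d = A @ x_true

# Weighted ISTA: each coefficient gets its own threshold step * lam * w.
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.5
x = np.zeros(160)
for _ in range(1000):
    x = soft_threshold(x - step * (A.T @ (A @ x - d)), step * lam * w)
```

The weight vector `w` is the tuning knob the text alludes to: without it, whichever dictionary happens to correlate better with the data absorbs most of the energy, regardless of which representation is physically appropriate.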
I test these approaches on both synthetic and field data, with examples from towed-streamer and ocean-bottom-node acquisitions. While in most cases a perceptible amount of blending-noise leakage remains in the results, a significant amount of blending noise is suppressed. In some cases the deblending process uncovers weak events previously masked by strong blending noise. When the hybrid transform is used, the results show better recovery of events that are filtered out when the focal transform is used alone. Curved near-offset events are in some cases also recovered with higher fidelity than when the curvelet transform is used alone.
A significant challenge is the sometimes limited focusing for field data and synthetics, a result of approximating the kinematics of complex 3D velocity models with flat-layered models and stacking velocities. The computational cost of the method is also a challenge. While working with data and focal-domain subsets helps, additional measures are needed before focal deblending can be applied to realistically sized field data. I make several suggestions for modifying the method and propose extensions for future research.