Our proprietary data conditioning is based on a well-established mathematical process. At EcoStim, we’re simply applying it in a novel way to condition seismic data to best highlight features that are relevant for shale formations. It’s analogous to a dip-filtering technique that determines the optimum geologic surface at a sample point in a 3D seismic volume. We perform the full analysis at every sample in the input volume and then output a 3D seismic volume that may be displayed or interpreted just like any other 3D seismic volume.

The first step is to transform the data volume from the original amplitude domain to the energy or envelope domain via a Hilbert transform.
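This step can be sketched in a few lines of Python (a minimal illustration only, assuming the volume is a NumPy array with the vertical time/depth axis last; `to_envelope` is a hypothetical name, not EcoStim's code):

```python
import numpy as np
from scipy.signal import hilbert

def to_envelope(volume):
    """Transform a seismic amplitude volume to the envelope (energy) domain.

    The Hilbert transform is applied along the vertical (time/depth) axis,
    assumed here to be the last axis of the array.
    """
    analytic = hilbert(volume, axis=-1)  # analytic signal a(t) = x(t) + i*H[x](t)
    return np.abs(analytic)              # envelope = |a(t)|
```

For a pure cosine trace the envelope is (away from the edges) a constant equal to the amplitude, which makes the transform easy to sanity-check.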

Next, we operate on the samples within a small moving sub-volume centered on the sample being analyzed. The optimum vertical and lateral extent of the sub-volume is determined by testing: we usually begin with 11 samples by 3 in-lines by 3 cross-lines and expand from there, settling on the smallest sub-volume that yields useful results. Although this is a multi-trace operator, we take steps to ensure that breaks present in the data are not smoothed out, which is very important for stratigraphic formations such as shale. Because the technique uses multi-dimensional matrix algebra, we illustrate the principles here with a 2D case only; during application, we operate strictly in three dimensions.
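The moving-window extraction described above might look like the following sketch (a simplified illustration, assuming an envelope volume stored as a NumPy array with axes in-line, cross-line, sample; `subvolume` and its clipped edge handling are our assumptions, not the production operator):

```python
import numpy as np

def subvolume(env, il, xl, s, half=(1, 1, 5)):
    """Extract the moving sub-volume centered on sample (il, xl, s).

    The default half-widths give the 3 in-line x 3 cross-line x 11 sample
    window described in the text. Edges are handled by simple clipping here
    for brevity; a production operator would pad or taper instead.
    """
    hi, hx, hs = half
    return env[max(il - hi, 0):il + hi + 1,
               max(xl - hx, 0):xl + hx + 1,
               max(s - hs, 0):s + hs + 1]
```

Away from the volume edges this returns a 3 x 3 x 11 block whose central element is the sample being analyzed.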

We perform a three-dimensional dip scan over a specified range of dips to determine the optimum surface in the sub-volume at the sample being analyzed. Each trace in the sub-volume contributes one dimension to the computational matrix, so the two-dimensional case we are illustrating creates a three-dimensional matrix. The basic 3 in-line by 3 cross-line window creates a nine-dimensional matrix, and we have used windows as large as 9 in-lines by 9 cross-lines, creating an 81-dimensional matrix to be solved. Each level (time or depth sample) in the sub-volume may be plotted as an n-dimensional vector in which the energy value of each trace at that level is one dimension of the vector.
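A dip scan of this kind can be illustrated in the 2D case by shifting traces to flatten each candidate dip and scoring the alignment with semblance (a standard coherence measure we substitute here for illustration; integer-sample shifts only, and `dip_scan` is a hypothetical name, not EcoStim's implementation):

```python
import numpy as np

def dip_scan(sub, dips):
    """Scan candidate dips (in samples per trace step) over a 2-D window
    `sub` with axes (trace, sample), and return the dip whose flattened
    traces are most coherent, together with its semblance score."""
    n_tr, n_s = sub.shape
    mid = n_tr // 2
    best_dip, best_sem = dips[0], -1.0
    for dip in dips:
        # shift each trace so a planar event with this dip becomes flat
        shifted = np.stack([np.roll(sub[i], -dip * (i - mid)) for i in range(n_tr)])
        stack = shifted.sum(axis=0)
        # semblance: energy of the stacked trace over summed trace energies
        sem = (stack ** 2).sum() / (n_tr * (shifted ** 2).sum() + 1e-12)
        if sem > best_sem:
            best_dip, best_sem = dip, sem
    return best_dip, best_sem
```

For a noise-free planar event the semblance approaches 1 at the true dip and drops sharply at incorrect dips, which is exactly the contrast the scan exploits.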

The sub-volume contains as many of these n-dimensional vectors as it has levels, and together they form a vector cluster. Ideally, all of the vectors would be co-linear; in practice they deviate from this ideal because the data are affected by seismic noise (random and coherent) and by "geologic" noise created by analyzing along an incorrect dip rate. Geo-Predict locates the weighted center of the cluster, projects it back onto the axis of the input trace, converts it back to the amplitude domain, and outputs it as the conditioned data. The sub-volume is then shifted down one sample and the process repeated until every sample of the volume has been analyzed.
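One plausible reading of the weighted-center step can be sketched as follows (our assumptions throughout: the cluster is a levels-by-traces array, the weighting is by vector length, and projecting back to the input trace means taking the center vector's component on that trace's axis; EcoStim's exact formulation is not given in the text):

```python
import numpy as np

def conditioned_sample(cluster, centre_index):
    """Collapse a vector cluster to its weighted center and keep the
    component on the axis of the input (center) trace.

    `cluster` has shape (n_levels, n_traces): one energy vector per level.
    Levels are weighted by their vector length so that strong, coherent
    levels dominate the center estimate -- an illustrative choice only.
    """
    norms = np.linalg.norm(cluster, axis=1)
    weights = norms / (norms.sum() + 1e-12)
    centre = (weights[:, None] * cluster).sum(axis=0)  # weighted center vector
    return centre[centre_index]                        # project onto input-trace axis
```

When every level's vector is identical (a perfectly co-linear cluster), the weighted center reproduces that vector exactly and the projected value equals the input trace's own energy, as the ideal case in the text would predict.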

*Left: input data; right: after Geo-Predict Data Conditioning, which has reduced noise, increasing the continuity of the reflectors and improving confidence in the interpretation. Faults, where present, are also much more sharply defined.*

**Before and After Geo-Predict Data Conditioning**

*Left: curvature on input data; right: curvature after Geo-Predict Data Conditioning, which reduces "high-frequency" chatter along the horizons, removing extraneous changes in dip and producing a much cleaner response. The NE-SW-trending fault in the center of the image now appears as a series of en echelon faults.*

*Left: curvature on input data; right: curvature after Geo-Predict Data Conditioning, which has greatly reduced the number of spurious events, allowing subtle trends to become obvious.*