  • Essay / Reducing Underwater Noise Using Different Techniques

    This essay surveys techniques for reducing underwater noise in acoustic signals: Wiener filters, adaptive filters, and wavelet thresholding. Underwater acoustic telemetry appears in applications such as data collection for environmental monitoring, communication with and between manned and unmanned underwater vehicles, transmission of divers' speech, and more. Acoustic communication remains an active area of research with significant challenges to overcome, particularly in horizontal, shallow-water channels. As human activity in the ocean has expanded, the field of underwater acoustics has developed a wide variety of applications, including acoustic communication, detection and localization of surface and submerged objects, echo sounding, and sub-bottom profiling for seismic exploration.

    A noise-removal algorithm based on short-term Wiener filtering is described, together with an analysis of filter performance in terms of processing gain, root mean square error, and signal distortion. Noise hampers sonar data collection and the associated processing used to extract information, because many signals of interest are short in duration and relatively low in energy. Passive sonar data is typically accompanied by ambient noise from shipping traffic, marine life, wave motion, ice movement and cracking (in the Arctic), and many other sources. The observation is modeled as

    x(n) = s(n) + η(n),

    where x(n) is the received signal corrupted by noise, s(n) is the uncorrupted signal, and η(n) is the additive noise.
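As a concrete illustration of this observation model, the sketch below builds a noisy observation x(n) = s(n) + η(n) in NumPy. The tone frequency, noise level, and signal length are arbitrary illustration choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(4096)

s = np.sin(2 * np.pi * 0.05 * n)          # uncorrupted signal s(n)
eta = 0.5 * rng.standard_normal(n.size)   # additive Gaussian noise eta(n)
x = s + eta                               # received signal x(n) = s(n) + eta(n)

# Input SNR: signal power ~0.5, noise power ~0.25, i.e. roughly 3 dB
snr_db = 10 * np.log10(np.mean(s ** 2) / np.mean(eta ** 2))
```
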
    Although the signal is generally not stationary over a long observation period, over a short time interval we can write

    R_x(l) = R_s(l) + R_η(l).

    Signal estimation is a classic problem in statistical signal processing, and the vector of optimal coefficients of the FIR (Wiener) filter is the solution of the Wiener-Hopf equation

    R_x h = r_s,

    where R_x is the correlation matrix of the observed noisy signal and r_s is a vector of values of the correlation function r_s(l) of the uncorrupted signal. The pre-whitened data is segmented into blocks, and an estimate of the local correlation function R_s(l) is formed for each segment. Optimal filtering is then performed on each segment with a Wiener filter designed for that segment, and the result is passed through the inverse filter to undo the effects of pre-whitening. The data is first segmented and filtered, and the resulting frames are weighted by a triangular window; the data is then re-segmented with frames shifted by half the frame length, filtered again, and weighted by a triangular window. The two weighted data sets are added to produce the final result and to minimize any artifacts at the boundaries between frames. The figure illustrates linear filtering of a signal in additive noise and indicates the two parts of the output: y_s(n), the result of processing the signal alone, and y_η(n), the part due to processing the noise alone, which can be regarded as the residual noise left after processing. The evaluation is carried out on a real data set representative of underwater acoustic recordings, using root mean square error, overall signal-to-noise ratio (SNR), and segmental SNR as criteria. These filters are generally designed by a calculation that involves estimating the autocorrelation of the signal, a difficult task at low SNR or in the presence of non-stationary components.
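A minimal sketch of the FIR Wiener filter described above, in NumPy. It assumes the noise autocorrelation r_η is known (white noise of known variance), so that r_s can be estimated as r_x − r_η; the segment-wise processing, pre-whitening, and triangular-window overlap-add of the full short-term method are omitted for brevity, and the filter length p = 32 is an arbitrary illustration value:

```python
import numpy as np

def autocorr(v, p):
    """Biased autocorrelation estimate r(0), ..., r(p-1)."""
    return np.array([np.dot(v[: v.size - l], v[l:]) for l in range(p)]) / v.size

def wiener_fir(x, r_eta, p=32):
    """Solve the Wiener-Hopf equation R_x h = r_s with r_s = r_x - r_eta
    (additive, uncorrelated noise), then apply the causal FIR filter h."""
    r_x = autocorr(x, p)
    r_s = r_x - r_eta[:p]
    R_x = np.array([[r_x[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz
    h = np.linalg.solve(R_x, r_s)
    return np.convolve(x, h)[: x.size]

# Demo: sinusoid in white noise of known variance 0.25
rng = np.random.default_rng(1)
n = np.arange(8192)
s = np.sin(2 * np.pi * 0.05 * n)
x = s + 0.5 * rng.standard_normal(n.size)
r_eta = np.zeros(32)
r_eta[0] = 0.25                 # white-noise autocorrelation: variance at lag 0 only

y = wiener_fir(x, r_eta)
mse_in = np.mean((x[64:] - s[64:]) ** 2)
mse_out = np.mean((y[64:] - s[64:]) ** 2)
```

The filter passes the narrowband tone and attenuates out-of-band noise, so the mean square error of the filtered output drops well below that of the raw observation.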
    Musical noise is a perceptual phenomenon that occurs when isolated peaks remain in the time-frequency representation after processing with a spectral-subtraction algorithm.

    Observation model

    The observation consists of N data samples, denoted z[n]; the noise is denoted η[n] and the signal of interest s[n]. Thus, for each sample n = 0 … N − 1, we have

    z[n] = s[n] + η[n].

    Since some of the proposed methods operate in the time-frequency plane, it is useful to briefly recall some properties. The short-term Fourier transform (STFT) of the observed signal is defined as

    Z[k, l] = Σ_{m=0}^{K−1} w[m] z[lN₀ + m] e^{−i2πkm/K},

    where w is a time window of length K, k = 0 … K − 1 and l = 0 … L − 1 are respectively the frequency and time indices, and the frame overlap is determined by the hop size N₀.

    Statistical assumptions for noise

    The noise is modeled as a locally centered, wide-sense stationary (WSS) Gaussian process. The Gaussian hypothesis is motivated by the observed similarity between the distribution of sea noise and its fitted theoretical Gaussian distribution. Note, however, that marine noise is colored, so its power spectral density (PSD) is not constant, particularly in the low-frequency domain.

    Time-frequency domain

    On each frequency channel k = 0 … K − 1, the Fourier coefficients of the noise η[k, l] are circularly symmetric complex Gaussian random variables, independent of S[k, l]. The noise is thus considered stationary and its variance does not depend on time. The most recent noise-reduction methods have been designed mainly to reduce musical noise while preserving, or even improving, the detection of signal components in the time-frequency representation (TFR).

    Wiener filtering (WF)

    In 1940, Norbert Wiener constructed a finite impulse response (FIR) filter w[n] to estimate the signal of interest s[n] from its noisy observation z[n]. This filter is designed to minimize the mean square error between the signal of interest and its estimate.
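The STFT framing just described can be sketched as follows (a Hann window and the values of K and N₀ are arbitrary illustration choices). On white noise, the average power per frequency channel is roughly constant over time, which matches the stationarity assumption above:

```python
import numpy as np

def stft(z, K=64, hop=32):
    """Z[k, l] = sum_m w[m] z[l*hop + m] exp(-2j*pi*k*m / K), Hann window w of length K."""
    w = np.hanning(K)
    L = (z.size - K) // hop + 1
    frames = np.stack([w * z[l * hop : l * hop + K] for l in range(L)])
    return np.fft.fft(frames, axis=1).T        # shape (K, L): frequency x time

rng = np.random.default_rng(2)
Z = stft(rng.standard_normal(4096))

# Average |Z[k, l]|^2 over the first and second halves of the frames:
power = np.abs(Z) ** 2
L = Z.shape[1]
ratio = power[:, : L // 2].mean() / power[:, L // 2 :].mean()
```

For stationary noise the two half-averages agree closely, so the ratio stays near one.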
    We show that the coefficients of this filter are given by

    h = R⁻¹ r,

    where r denotes the autocorrelation vector of s[n] and R the covariance matrix of z[n]. R is a symmetric positive semi-definite matrix, and it is invertible as long as the variance of z[n] is non-zero.

    Wavelet thresholding (WT)

    Another approach is Donoho's well-known wavelet thresholding method, which removes the noisy part of the wavelet coefficients. The first step is to compute the discrete wavelet transform (DWT) of the signal with the multi-resolution algorithm. To do this, a filter bank is constructed from a given mother wavelet ψ(t) such that

    ψ_{j,k}(t) = 2^{−j/2} ψ(2^{−j} t − k),

    where j is the scale parameter and k is the shift parameter. For the evaluation, a Daubechies wavelet of order 6 is used to compute the DWT. The second step is thresholding in the wavelet domain, shrinking the coefficients w_{j,k} with the soft thresholding rule

    ŵ_{j,k} = sign(w_{j,k}) · max(|w_{j,k}| − s_j √(2 ln N), 0),

    where N is the number of samples and s_j the standard deviation of the noise at scale j. The MDF method produces slightly better performance at low SNR and in colored noise. S. S. Murugan et al. studied real-time data collected from the Bay of Bengal at Chennai, implementing the spectral estimation methods of Welch, Bartlett, and Blackman, and improved the maximum signal-to-noise ratio to 42-51 dB. Ambient noise sources include geological disturbances, nonlinear wave interaction, turbulent wind pressure on the sea surface, shipping, distant storms, seismic prospecting, marine animals, breaking waves, sea spray, rain, hail impacts, and turbulence. A direct link between wind force and ambient noise level is observed over the frequency range from 500 Hz to 25 kHz. The spectrum of noise levels is summarized. Work on ambient noise spectra and sources in the ocean has found that the dependence of underwater ambient noise on wind and sea conditions decreases below 500 Hz. Spectral estimation plays an important role in signal detection and monitoring.
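The two steps (DWT, then soft thresholding with the universal threshold) can be sketched with a dependency-free Haar DWT. The Haar wavelet, the four decomposition levels, and a single noise estimate taken from the finest scale (instead of the per-scale s_j and the Daubechies-6 wavelet of the text) are simplifying assumptions for this sketch:

```python
import numpy as np

def haar_dwt(v):
    """One level of the Haar DWT: approximation and detail coefficients."""
    return (v[0::2] + v[1::2]) / np.sqrt(2), (v[0::2] - v[1::2]) / np.sqrt(2)

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    out = np.empty(2 * a.size)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def soft_threshold(w, t):
    """Donoho soft thresholding: shrink each coefficient toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def wavelet_denoise(z, levels=4):
    details, a = [], z
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745   # noise std from finest scale
    t = sigma * np.sqrt(2 * np.log(z.size))          # universal threshold
    for d in reversed(details):
        a = haar_idwt(a, soft_threshold(d, t))
    return a

# Demo: piecewise-constant signal in Gaussian noise
rng = np.random.default_rng(3)
s = np.repeat([0.0, 2.0, -1.0, 3.0], 256)
z = s + 0.5 * rng.standard_normal(s.size)
mse_in = np.mean((z - s) ** 2)
mse_out = np.mean((wavelet_denoise(z) - s) ** 2)
```

Because a piecewise-constant signal concentrates in the approximation coefficients, thresholding the details removes most of the noise power.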
    Applications of spectral estimation include harmonic analysis and prediction, time-series extrapolation and interpolation, spectral smoothing, bandwidth compression, beamforming, and direction finding. Spectral estimation is based on the idea of estimating the autocorrelation sequence of a random process from a set of measured data and Fourier-transforming it to obtain the power spectrum estimate.

    Bartlett method

    The Bartlett method is also known as periodogram averaging. The input sequence x(n) of length N is divided into K non-overlapping sequences of length L such that N = KL. Bartlett's estimate is given by

    P̂_B(e^{jω}) = (1/N) Σ_{i=0}^{K−1} |Σ_{n=0}^{L−1} x_i(n) e^{−jωn}|².

    Welch method

    The Welch method is also known as the modified periodogram. Welch proposed two modifications to Bartlett's method: the first allows the sequences x_i(n) to overlap, and the second applies a data window w(n) to each sequence. The Welch estimate is given by

    P̂_W(e^{jω}) = (1/(KLU)) Σ_{i=0}^{K−1} |Σ_{n=0}^{L−1} w(n) x_i(n) e^{−jωn}|², with U = (1/L) Σ_{n=0}^{L−1} w²(n).

    Blackman-Tukey method

    The Blackman-Tukey method is known as periodogram smoothing. It smooths the periodogram by convolving it with the Fourier transform W(e^{jω}) of an autocorrelation window w(l); equivalently,

    P̂_BT(e^{jω}) = Σ_{l=−M}^{M} w(l) r̂_x(l) e^{−jωl}.

    Adaptive filtering algorithms

    Many efficient computational algorithms for adaptive filtering have been developed. They are based either on a statistical approach, such as the least mean squares (LMS) algorithm, or on a deterministic approach, such as the recursive least squares (RLS) algorithm. Adaptive noise cancellation techniques are used to mitigate unwanted noise effects.

    LMS algorithm

    The LMS algorithm is a member of the family of stochastic gradient algorithms. The recursion that updates the tap-weight vector is given by

    w(n + 1) = w(n) + µ x(n) e*(n),

    where x(n) is the tap-input vector, e(n) is the error signal, and µ is the step size. At each iteration, the algorithm requires only the most recent values x(n), d(n), and w(n).
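A sketch of the Welch estimate in NumPy; with a rectangular window and zero overlap it reduces to Bartlett's method. The segment length, overlap, and test frequencies are arbitrary illustration values:

```python
import numpy as np

def welch_psd(x, L=128, overlap=64, window=None):
    """Average windowed periodograms of (possibly overlapping) length-L segments.
    window=None means rectangular; rectangular window + overlap=0 gives Bartlett."""
    w = np.ones(L) if window is None else window
    U = np.mean(w ** 2)                      # window power normalization
    step = L - overlap
    segs = [x[i : i + L] for i in range(0, x.size - L + 1, step)]
    P = np.mean([np.abs(np.fft.fft(w * seg)) ** 2 for seg in segs], axis=0)
    return P / (L * U)

rng = np.random.default_rng(4)
n = np.arange(4096)
noise = rng.standard_normal(n.size)

flat = welch_psd(noise)                      # unit-variance white noise: PSD ~ 1
peaked = welch_psd(0.1 * noise + np.sin(2 * np.pi * (16 / 128) * n))
peak_bin = int(np.argmax(peaked[:64]))       # the tone falls exactly in bin 16
```

The normalization by LU makes the white-noise estimate read directly as the noise variance, while the tone shows up as a sharp peak at its frequency bin.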
    NLMS algorithm

    The term "normalized" refers to the fact that the adjustment applied to the tap-weight vector at iteration n + 1 is normalized with respect to the squared Euclidean norm of the tap-input vector x(n) at iteration n; NLMS differs from LMS only in how the weight update is computed. The recursion for updating the tap-weight vector is given by

    w(n + 1) = w(n) + (µ̃ / ‖x(n)‖²) x(n) e*(n).

    RLS algorithm

    The recursive least squares (RLS) adaptive filter is an algorithm that recursively finds the filter coefficients minimizing a weighted linear least-squares cost function of the input signals. This contrasts with algorithms such as least mean squares (LMS), which aim to reduce the mean square error. RLS exhibits extremely fast convergence; however, this advantage comes at the cost of high computational complexity and poorer tracking performance.
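The NLMS recursion above can be sketched for a system-identification task (real-valued signals, so e*(n) = e(n); plain LMS is the same update with a fixed, unnormalized step). The filter order, step size, and the small regularizer delta added to the norm are illustration choices:

```python
import numpy as np

def nlms(x, d, p=8, mu=0.5, delta=1e-6):
    """NLMS: w(n+1) = w(n) + mu / (delta + ||x(n)||^2) * x(n) e(n)."""
    w, e = np.zeros(p), np.zeros(x.size)
    for n in range(p - 1, x.size):
        xn = x[n - p + 1 : n + 1][::-1]      # tap-input vector [x(n), ..., x(n-p+1)]
        e[n] = d[n] - w @ xn                 # error against the desired response d(n)
        w = w + (mu / (delta + xn @ xn)) * e[n] * xn
    return w, e

# Demo: identify an unknown FIR system from its noisy output
rng = np.random.default_rng(5)
h_true = np.array([1.0, -0.5, 0.25, 0.1])
x = rng.standard_normal(4000)
d = np.convolve(x, h_true)[: x.size] + 0.01 * rng.standard_normal(x.size)

w, e = nlms(x, d, p=4)
```

After convergence the weights approximate the unknown system and the error settles near the observation-noise floor.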