Proposal for Reconstruction of Signals from Irregularly Sampled Data

submitted to fulfill the semester project requirement for EE 4773/6773: Digital Signal Processing

October 5, 1997

submitted to:

Dr. Joseph Picone
Department of Electrical and Computer Engineering
413 Simrall, Hardy Rd.
Mississippi State University
Box 9571
MS State, MS 39762

submitted by:

Vamsidhar Juvvigunta, Balakrishna Nakshatrala and Nagamohan Kompella
Signal Group
The Institute for Signal and Information Processing
Department of Electrical and Computer Engineering
Mississippi State University
Box 6176
Mississippi State, Mississippi 39762
Tel: 601-325-8335
Fax: 601-325-3149
email: {vamsi, balki, mohi}@ERC.MsState.Edu

I. ABSTRACT

This project aims at implementing some of the available algorithms for the reconstruction of irregularly sampled, band-limited signals. These algorithms include the Adaptive Weights method (ADPW), the Projection Onto Convex Sets (POCS) method, and the Wiley/Marvasti (WILMAR) method. This work deals with one-dimensional signals. The algorithms will be tested on computer-generated synthetic data and then compared on their speed of convergence and their robustness. We will also study the influence of the auxiliary parameters specific to each method.

II. INTRODUCTION

The sampling problem is one of the standard problems in signal analysis. Since a signal f(x) cannot be recorded in its entirety, it is sampled from the continuous signal at regular or irregular time intervals. The reconstruction of regularly sampled signals using sinc-type interpolation is a well-understood problem in digital signal processing. Sometimes, however, there is good reason to sample a signal irregularly, in which case the reconstruction proceeds differently.
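To make the regular case concrete: for a periodic band-limited signal, sinc-type interpolation can be carried out exactly with the periodic sinc (Dirichlet) kernel. The sketch below is illustrative only (Python rather than the C/C++ planned for the project, with function names of our own choosing):

```python
import math

def dirichlet(u, n):
    """Periodic sinc (Dirichlet) kernel for n uniform samples per period, n odd.
    Equals 1 when u is a multiple of n and 0 at every other integer."""
    if abs(math.remainder(u, n)) < 1e-12:   # u lands exactly on the kernel peak
        return 1.0
    return math.sin(math.pi * u) / (n * math.sin(math.pi * u / n))

def sinc_interpolate(samples, t):
    """Evaluate the band-limited interpolant of regularly spaced samples at
    an arbitrary (possibly fractional) time t."""
    n = len(samples)
    assert n % 2 == 1, "this form of the kernel assumes an odd sample count"
    return sum(s * dirichlet(t - m, n) for m, s in enumerate(samples))
```

For a periodic signal containing only frequencies up to (n - 1)/2 cycles per period, this reconstruction from its n uniform samples is exact up to rounding error.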
The reconstruction of irregularly sampled data is usually done by iterative methods. The few direct methods that exist (such as the direct version of POCS) are acceptably fast for small data sets but become computationally very expensive for large ones.

The problem of irregular sampling and the subsequent reconstruction of the signal is not a contrived one. On the contrary, irregular sampling is very common. For example, when a signal is transmitted, some information is inevitably lost on the way to the receiver; individual samples may be dropped, leading essentially to irregular sampling (due to data loss). Likewise, sampled signals stored on magnetic media may deteriorate over time and lose some samples. In all these cases we have either irregularly sampled signals or regularly sampled signals with missing samples, which renders them essentially irregularly sampled.

Another field where nonuniform sampling and reconstruction may be of use is (adaptive) data compression of signals using adaptive nonuniform resamplers. Consider a Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) image in which a significant portion of the data set carries no information. It is beneficial to resample such an image irregularly, with a low sampling density in areas of little or no signal change and a higher density in regions of interest. The resampled image uses significantly less storage space and can still be reconstructed completely from the irregular samples.

Our main motivation, however, is to understand the reconstruction of irregularly sampled signals in one dimension so that we can later extend it to two dimensions. The 2-D reconstruction problem arises when one considers signal processing on the solution of a flow field computed on an irregular grid, which is part of the master's thesis work of two of us.

III. ITERATIVE RECONSTRUCTION ALGORITHMS

Most of the iterative algorithms can be considered alternating mapping methods that use the given irregularly spaced sampling information about the signal f together with the information about the spectral support of f. Given the samples f(t_j), j = 1, ..., p, each of these methods constructs an auxiliary signal, which is then low-pass filtered to obtain an approximation f_1. The difference between the sampling values and the approximation then goes into the next step of an iterative procedure, which allows the band-limited signal to be recovered completely. The following two-step iteration is typical of most of the methods.

1. First, an auxiliary signal Af is constructed from the given sampling values of f by an approximation operator A. A is a linear operator, except in the case of the POCS method. The resulting function Af can be a step function, a piecewise linear interpolation, or a well-chosen discrete measure. It is important that only the sampling values of f are required to construct Af.

2. Second, the signal Af is projected onto the space B of band-limited signals by an orthogonal projection P, in order to kill the high frequencies of Af. This projection can be described as a low-pass filter whose transfer function is the indicator function 1_Ω of the spectral support Ω, i.e.

    F{P g}(ω) = 1_Ω(ω) · F{g}(ω).    (1)

Alternatively, the projection onto B can be interpreted as convolution with a sinc function, where sinc is the inverse Fourier transform of the indicator function 1_Ω:

    P g = g * sinc,  sinc = F^{-1}{1_Ω}.    (2)

After these two steps the first cycle is finished and we obtain the approximation signal f_1 = P(Af). The next iteration starts again by constructing a new auxiliary signal, this time from the sampling error e_1, where e_1(t_j) = f(t_j) - f_1(t_j); its filtered version is added to f_1 to obtain f_2. Hence the k-th iteration can be described by

    f_{k+1} = f_k + P(A e_k),  e_k(t_j) = f(t_j) - f_k(t_j),    (3)

where A e_k is the unfiltered auxiliary signal constructed from the "sampling error" e_k.
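The two-step cycle described above can be sketched in a few lines of code. The fragment below is a sketch in Python (illustration only; the project code is planned in C/C++). It assumes the simplest choice of auxiliary signal, a discrete measure carrying the relaxed sampling error, and realizes the low-pass projection by discarding DFT bins outside the band; the signal length, band limit, and relaxation value are free parameters of the sketch:

```python
import cmath

def lowpass(x, kmax):
    """Orthogonal projection P: keep only the DFT bins |k| <= kmax of x."""
    n = len(x)
    out = [0.0] * n
    for k in range(-kmax, kmax + 1):
        # analysis: Fourier coefficient of bin k
        c = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)) / n
        # synthesis: add this bin's contribution back
        for t in range(n):
            out[t] += (c * cmath.exp(2j * cmath.pi * k * t / n)).real
    return out

def reconstruct(times, values, n, kmax, relax, iters):
    """Iterate f_{k+1} = f_k + P(A e_k), with A e_k a discrete measure
    carrying the sampling error e_k scaled by a relaxation parameter."""
    approx = [0.0] * n
    for _ in range(iters):
        aux = [0.0] * n
        for t, v in zip(times, values):
            aux[t] = relax * (v - approx[t])   # step 1: auxiliary signal from error
        corr = lowpass(aux, kmax)              # step 2: remove high frequencies
        approx = [a + c for a, c in zip(approx, corr)]
    return approx
```

When the samples are reasonably dense relative to the band limit and the relaxation parameter is chosen suitably, the approximation converges to the band-limited signal that interpolates the given sample values.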
A reformulation of (3) provides

    f_{k+1} = P(Af) + (I - PA) f_k.    (4)

The error estimate is then ||f - f_k|| <= γ^k ||f||, where γ = ||I - PA|| < 1.

IV. THE ADAPTIVE WEIGHTS METHOD

In this case the approximation operator is of the form

    Af = Σ_j w_j f(t_j) δ_{t_j}.    (5)

The weights w_j are established in an adaptive way, depending on the sampling geometry. A good way to determine the w_j is the "nearest neighborhood" method:

    w_j = (t_{j+1} - t_{j-1}) / 2.    (6)

These w_j coincide with the lengths |V_j| of the Voronoi cells V_j, since the cell around t_j extends from the midpoint (t_{j-1} + t_j)/2 to the midpoint (t_j + t_{j+1})/2, so |V_j| = (t_{j+1} - t_{j-1})/2. The essential effect of the adaptive weights algorithm is to attach lower importance to sampling points that lie in regions of high sampling density and, conversely, to lend higher weight to sampling points that are more isolated. We get the best results using Voronoi weights because they directly reflect the sampling geometry. The theory guarantees convergence of the adaptive weights algorithm if the sampling sequence satisfies the Nyquist criterion, i.e., if the maximal gap between successive samples is smaller than the Nyquist sampling interval.

V. THE WILEY/MARVASTI METHOD

The Marvasti approach has its origin in Wiley's natural sampling method and makes use of the reproducing formula f = f * sinc and the interpretation of the shift operator as convolution with Dirac measures. The Marvasti approximation operator is thus nothing but the discrete measure

    Af = Σ_j f(t_j) δ_{t_j},    (7)

and the first approximation signal is obtained by convolving Af with the sinc function:

    f_1 = Σ_j f(t_j) sinc(t - t_j).    (8)

Often the convergence of the algorithm speeds up considerably if one multiplies (8) by a global relaxation parameter λ, obtaining

    f_1 = λ Σ_j f(t_j) sinc(t - t_j).    (9)

The speed of convergence depends on the choice of λ. If the relaxation parameter is too large, the iteration will diverge; a small value of λ brings slow but stable convergence. A good choice of λ is

    λ = n / (c · p),    (10)

where n is the length of the signal f, p is the number of sampling points, and c >= 1 is a safety factor that helps prevent divergence. For highly irregularly sampled signals one can enforce convergence by choosing a larger value of c.
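The nearest-neighborhood weights in (6) are easy to compute from the sample positions. The following Python sketch (illustration only) does so; as an extra assumption not in the text above, it treats the domain as circular with n points, so the first and last samples also receive full-size Voronoi cells:

```python
def voronoi_weights(times, n):
    """Nearest-neighborhood (Voronoi) weights w_j = (t_{j+1} - t_{j-1}) / 2
    on a circular domain of n points: half the gap to each neighbor."""
    ts = sorted(times)
    p = len(ts)
    weights = []
    for j in range(p):
        prev_gap = (ts[j] - ts[j - 1]) % n        # wraps around at j == 0
        next_gap = (ts[(j + 1) % p] - ts[j]) % n  # wraps around at j == p - 1
        weights.append((prev_gap + next_gap) / 2.0)
    return ts, weights
```

Since the Voronoi cells tile the whole domain, the weights sum to n; isolated samples receive large weights and clustered samples small ones, which is exactly the reweighting described above.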
Doing so, however, has a detrimental effect on the rate of convergence. The Marvasti method is the special case of the adaptive weights method in which all the weights are chosen equal (to λ).

VI. PROJECTION ONTO CONVEX SETS (POCS) METHOD

This is a recursive method for finding a point in the intersection of p given closed convex sets. (A set C is said to be convex if, for any two points x, y in C, the whole line segment between x and y belongs to C.) Consider the Hilbert space L^2 of all square-integrable functions, with norm ||f|| = (∫ |f(t)|^2 dt)^{1/2}, and a convex set C in L^2. For any f in L^2, the projection Pf of f onto C is by definition the closest neighbor to f in C. If C is closed and convex, Pf exists and is uniquely determined by f and C through the minimality criterion

    ||f - Pf|| = min_{g in C} ||f - g||.    (11)

This rule defines the (in general) nonlinear operator P. The central idea of the POCS method is as follows. Almost any datum about an unknown signal f can be viewed as a constraint that restricts f to lie in a closed convex set in L^2 (in our case such a property is given by the sample values). Thus, for p known properties there are closed convex sets C_1, ..., C_p, and f must lie in the intersection

    C = C_1 ∩ C_2 ∩ ... ∩ C_p.    (12)

The problem is then to find a point of C given the sets C_j and the projection operators P_j projecting onto C_j for j = 1, ..., p. The convergence properties of the sequence generated by the recursion relation

    f_{k+1} = P_p P_{p-1} ... P_1 f_k    (13)

are based on fundamental proofs in functional analysis. An important example is the set of all square-integrable functions whose Fourier transform vanishes outside Ω, denoted by B; it is a closed linear subspace of L^2. For a given sequence of samples f(t_j) of an unknown function f, we form the sets

    C_j = { g in B : g(t_j) = f(t_j) }.    (14)

In words, C_j is the set of band-limited functions whose value at the sampling point t_j coincides with the value of the sampled function; C_j can be described as a translate of a closed linear subspace of B. The projection of an arbitrary function g in B onto C_j is given by

    P_j g = g + [ f(t_j) - g(t_j) ] sinc(t - t_j) / sinc(0).    (15)

This clearly satisfies (P_j g)(t_j) = f(t_j).
Iterative Method

In the iterative description, the information in the sampling values is used step by step as a correction term: at each step a certain multiple of the shifted filter function is added to the approximation. With the P_j defined explicitly as in (15), the iterative reconstruction algorithm becomes

    f_{k+1} = P_p P_{p-1} ... P_1 f_k,    (16)

where f_k represents the k-th estimate of a band-limited function consistent with all sampling values, f_{k+1} is the one-cycle improvement over f_k, and f_0 is the initial estimate, which represents a first approximation of the unknown function f. The algorithm converges to a point in the set

    B ∩ C_1 ∩ ... ∩ C_p,    (17)

which is in general the original signal f.

VII. IMPLEMENTATION

The program will be written in C/C++. It will essentially implement the three above-mentioned algorithms. The following is a list of the input and output parameters for the methods.

Input Parameters
· Number of sampling points
· Length of the signal
· Maximum number of iterations
· Filter
· Threshold for the precision of reconstruction
· Algorithm-specific parameters such as the relaxation parameter, the weights, etc.

Output Parameters
· Reconstructed signal
· Error at each iteration
· Total time taken
· Number of iterations needed for convergence

VIII. PROJECT DEMONSTRATION

The demonstration of the project will involve the reconstruction of several irregularly sampled data sets. Some of the data sets considered will be regularly sampled data sets from which samples have been removed. As of now the GUI is planned to be written in Tcl/Tk, which is portable across any X-Windows-based operating system. Should the need arise, however, the GUI may be rewritten using other software tools or languages. A preliminary view of the GUI is shown below. The final GUI may differ considerably from what is shown here but will retain, and possibly add to, the functionality of the GUI shown. The GUI will allow the user to choose one of the three methods for the reconstruction of a given discrete-time signal.
It will also serve as a graphing tool to visualize the signal at any stage of reconstruction. If the sampled discrete-time signal is synthetic and the actual signal is available, the GUI can show the error signal, i.e., the difference between the output signal and the actual signal. The menu items will allow the selection of input files for processing. The Options menu item will be used to set the relaxation parameters for the algorithms that need them. The Help menu will most probably be used only to explain the theory and the meaning of the variables in the Options menu.

IX. EVALUATION

The evaluation of this project will involve comparing the reconstructed signal with the actual signal. For this, we will start from a regularly sampled signal, resample it irregularly, reconstruct it from the irregular samples, and compare the result with the original. Since each method employs different types of auxiliary parameters, the methods cannot be compared uniformly based upon the effect of these parameters; however, the influence of these parameters will form a part of the overall evaluation. The algorithms will also be compared, on the same data set, with respect to the following criteria:

Speed of convergence
1. In terms of iterations
2. In terms of time:
a. Time taken to reach a given precision
b. Precision obtained within a given time

X. SCHEDULE

A schedule for the major tasks in this project is shown in Figure 1. (The major tasks are the proposal, research, implementation of the algorithms, testing and debugging of the algorithms, GUI development, the presentation and demo, and the final paper.)

XI. REFERENCES

[1] H.G. Feichtinger and K. Gröchenig, "Theory and practice of irregular sampling," in J. Benedetto and M. Frazier (eds.), Wavelets: Mathematics and Applications, CRC Press, pp. 305-363, 1993.
[2] H.G. Feichtinger, C. Cenker, and M. Herrmann, "Iterative algorithms in irregular sampling: A first comparison of methods," in Proc. ICCCP'91, Phoenix, AZ, pp. 483-489, March 1991.
[3] H.G. Feichtinger and C. Cenker, "Reconstruction algorithms for discrete non-uniform sampled band-limited signals,"
in Proc. 15th ÖAGM Conf., ÖCG, vol. 56, pp. 51-61, May 1991.
[4] T. Strohmer, "Irregular sampling, frames and pseudo-inverse," Master's thesis, University of Vienna, 1991.