List of Figures
1. Postaliasing
2. Smoothing
3. Overshoot
4. Maximum Derivative
5. Ideal Filters
6. Filter of Window 4
7. The unsampled test signal
8(a). Postaliasing for cubic filters
8(b). Postaliasing for smooth filters
9(a). Smoothing for cubic filters
9(b). Smoothing for smooth filters
10(a). Overshoot for cubic filters
10(b). Overshoot for smooth filters
11(a). Maximum derivative for cubic filters
11(b). Maximum derivative for smooth filters
12. Cubic Filters - with varying C
13. Cubic Filters - with varying B
14. Smooth Filters - with varying continuity
15. Smooth Filters - with varying error factor
EVALUATION OF GRADIENT FILTERS FOR VOLUME RENDERING
Shivaraj Tenginakai, Nivedita Sahasrabudhe, Brandon Butler, and Phanikant Mantena
NSF Engineering Research Center for Computational Field Simulation
Mississippi State University, Mississippi State, MS 39762.
{shiva,nive,brb1}@erc.msstate.edu, pkm2@ra.msstate.edu
ABSTRACT
The process of volume rendering involves the reconstruction of a continuous function and its gradients from a discrete set of samples. An interpolation filter is used for the function reconstruction, while a derivative filter is used for gradient reconstruction. In this paper we evaluate the fidelity of the gradient filter on the rendered volume.We also perform a frequency domain analysis of commonly used filters for gradient reconstruction and especially focus on three different kinds of deviations of the filters from the ideal: smoothing, postaliasing and overshoot. Suitable metrics are proposed to measure these deviations. Also, a new metric for assessing filter quality, called the Maximum Derivative. Finally, we compare the results of our theoretical analysis with the quality of the actual rendered image.
1. INTRODUCTION
2D images can be considered the result of sampling luminosity in the spatial domain. Similarly, volume data generated by a CT, MRI or ultrasound scanner measure a scalar field (absorption, magnetic resonance or scatter, respectively) at each sample point. To reconstruct an image we need the scalar values at arbitrary points in the domain, and to render the image we further need the gradient values at these points.
Gradient estimation is central to volume rendering: an inaccurate estimate of the gradient will yield wrong colors and opacities along a ray and thus severely degrade the quality of the rendered image [7]. Hence, the choice of the gradient filter is critical.
In this paper we investigate the importance of the gradient filter in determining the quality of the rendered image. We do this by using various filters for gradient reconstruction and observing the differences in the quality of the rendered images. Next, we extend the existing metrics for evaluating interpolation filters to gradient filters. Finally, we compare the results of the theoretical analysis with the images rendered previously.
Many researchers have shown that the Sinc function is an ideal interpolation filter and the Cosc function, which is the analytic derivative of the Sinc, is an ideal gradient filter [1]. These filters completely cut off frequencies above the Nyquist frequency. Because of this discontinuity, these filters have infinite window extent in the spatial domain. Therefore these filters are impractical to use for digital signals.
A comparative study by Marschner and Lobb [6] proposed different error metrics for measuring the various reconstruction artifacts that arise from the application of interpolation filters. These metrics analyze filters in the frequency domain and measure the smoothing, postaliasing and overshoot attributes of an interpolation filter. Their study showed that a windowed Sinc exhibits the best behavior; however, such filters are expensive to use because of their size. From a practical point of view, they found that the Catmull-Rom filter of the cubic interpolation family produced the best results.
2. THEORY
2.1 Filter Equation
In order to reconstruct a continuous function f(t) or its derivative/gradient f'(t) from a set of sampled points fk, we convolve the sample values fk with a filter kernel w [3].
Let,
k, be an integer,
t, be the distance,
T, be the sampling distance,
f(t), be the original continuous signal,
fk, denote the samples of the original signal, i.e., fk = f(kT),
w, be the filter kernel, which provides the weights,
frw, be the reconstructed signal,
Then, applying the convolution sum, we get

f_rw(t) = Σ_k f_k · w(t − kT)    (1)

This is the basic filter equation. If f_rw in Equation (1) represents the reconstruction of the original function f(t) from the sample values, the filter w is called an interpolation filter. If f_rw in Equation (1) represents the reconstruction of the derivative f'(t) from the sample values, the filter w is called a gradient filter.
Equation (1) represents an infinite-window filter; in practice, however, we never use an infinitely long filter. We usually define the filter extent to be M, which implies that w is zero outside the interval [-M, M].
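The truncated convolution sum can be sketched directly. A minimal illustration (the linear "tent" kernel, unit sample spacing and the sine test samples are illustrative choices, not from the paper; the kernel argument is normalized by T so the window is expressed in units of samples):

```python
import numpy as np

def reconstruct(samples, w, T, t, M=2):
    """Evaluate the filter equation f_rw(t) = sum_k f_k * w((t - k*T)/T),
    truncating the kernel to the window [-M, M] (in units of samples)."""
    k0 = int(np.floor(t / T))
    total = 0.0
    for k in range(k0 - M + 1, k0 + M + 1):
        if 0 <= k < len(samples):
            total += samples[k] * w((t - k * T) / T)
    return total

def tent(s):
    """Linear 'tent' interpolation kernel, support [-1, 1]."""
    return max(0.0, 1.0 - abs(s))

# Reconstruct sin(t) at an off-grid point from unit-spaced samples.
fk = np.sin(np.arange(10))
print(reconstruct(fk, tent, T=1.0, t=2.5))   # (sin 2 + sin 3)/2 ≈ 0.525
```

With the tent kernel the sum reduces to linear interpolation between the two nearest samples; swapping in a wider kernel only changes the window M and the function w.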
2.2 Gradient Estimation
There are numerous schemes to compute the derivatives [4]. The most common ones are summarized below:
1. Derivative First: In this approach, we first compute the derivatives at the grid points using a gradient filter d and then interpolate these gradients using an interpolation filter h.
f'_rw(t) = Σ_k ( Σ_i f_i · d((k − i)T) ) · h(t − kT)    (2)
2. Interpolation First: In this methodology we first reconstruct the continuous function using an interpolation filter and then apply the gradient filter to obtain the gradients.
f'_rw(t) = Σ_k f_rw(t − kT) · d(kT),  where f_rw(t) = Σ_i f_i · h(t − iT)    (3)
3. Continuous Derivative: In this method we convolve the interpolation filter and the gradient filter to obtain a new continuous gradient filter. We can then use this filter to construct gradients at any arbitrary point.
f'_rw(t) = Σ_k f_k · w'(t − kT),  where w'(t) = (h * d)(t) = Σ_i d(iT) · h(t − iT)    (4)
4. Analytical Derivative: In this method we compute the gradients by convolving the samples with the derivative of the interpolation filter:

f'_rw(t) = Σ_k f_k · h'(t − kT)    (5)
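As a small sketch of the analytical-derivative scheme, Equation (5), the piecewise-linear kernel and its derivative below are illustrative stand-ins for the cubic filters used later in the paper:

```python
import numpy as np

def dtent(s):
    """Analytic derivative of the linear 'tent' kernel (zero at the kinks)."""
    if -1.0 < s < 0.0:
        return 1.0
    if 0.0 < s < 1.0:
        return -1.0
    return 0.0

def gradient(samples, t, T=1.0):
    """Scheme 4: f'_rw(t) = (1/T) * sum_k f_k * h'((t - k*T)/T)."""
    k0 = int(np.floor(t / T))
    return sum(samples[k] * dtent((t - k * T) / T) / T
               for k in range(max(k0 - 1, 0), min(k0 + 2, len(samples))))

fk = np.sin(np.arange(10))
print(gradient(fk, 2.5))   # sin(3) - sin(2): the slope of the linear interpolant
```

For the tent kernel the scheme returns the slope of the linear interpolant between the bracketing samples, a first-order approximation of cos(2.5) in this example.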
3. FILTER QUALITY METRICS
In order to evaluate various practical gradient filters we compare them with an ideal gradient filter. We extend the metrics developed by Marschner and Lobb for interpolation filters, and we also propose a new metric to better characterize the aliasing artifact, which is not addressed adequately by Marschner and Lobb [6].
3.1 Extension of the Marschner and Lobb Metrics
Marschner and Lobb proposed three metrics to analyze a given filter with respect to an ideal filter, for three main image artifacts: postaliasing, smoothing and overshoot. An ideal gradient filter can be shown to be characterized by a line with a constant slope in the frequency domain [1].
Postaliasing arises when the energy from the alias spectra leaks through into the reconstruction, due to the filter being significantly non zero at frequencies beyond the Nyquist range.
We define the postaliasing metric P(w) to be

P(w) = A = ∫_{|ν| > ν_N} |ŵ_r(ν)| dν    (6)

where,
A is the area under the filter response beyond the Nyquist frequency ν_N (the blue shaded region in Figure 1)
ŵ_r is the frequency response of the real filter kernel
Smoothing measures the attenuation of high frequency terms by the filter. Though smoothing is desirable since it attenuates noise, excessive smoothing leads to blurred images.
We define the smoothing metric S(w) to be

S(w) = A = ∫_{|ν| ≤ ν_N} max(|ŵ_i(ν)| − |ŵ_r(ν)|, 0) dν    (7)

where,
A is the area between the ideal and the real filters within the Nyquist region, where the real response falls below the ideal (the blue shaded region in Figure 2)
ŵ_r is the real filter
ŵ_i is the ideal filter
Overshoot measures the extent to which the filter response exceeds the ideal response; excessive overshoot produces ringing artifacts near sharp transitions.
We define the overshoot metric O(w) to be

O(w) = A = ∫_{|ν| ≤ ν_N} max(|ŵ_r(ν)| − |ŵ_i(ν)|, 0) dν    (8)

where,
A is the area between the ideal and the real filters within the Nyquist region, where the real response exceeds the ideal (the blue shaded region in Figure 3)
ŵ_r is the real filter
ŵ_i is the ideal filter
3.2 New Filter Quality Metric
The above metrics have some inadequacies: they measure the overall energy difference between an ideal and a real filter, but they fail to take into account the distribution of this difference. For example, the filters shown in Figures 4(a) and 4(b) evaluate to the same value of the Marschner and Lobb postaliasing metric, since the energy difference (proportional to the shaded area) is the same in either case. Perceptually, however, a gradually varying filter, as in Figure 4(a), is preferable to a filter with sharp variations.
To measure this important factor we propose a new metric based on the L∞ norm. We call this new metric the Maximum Derivative metric M(w) and define it to be

M(w) = max_{|ν| ≤ ν_N} | d²ŵ_r(ν)/dν² |    (9)

Ideally, the slope of the frequency response of a derivative filter should be constant; that is, the rate of change of the slope should be zero. For practical filters this is not true. A good practical filter, however, should have a small rate of change of slope, and hence M should be as low as possible.
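The four metrics can be evaluated numerically from a sampled frequency response. A sketch, assuming unit sample spacing (Nyquist frequency π) and using the central-difference gradient filter, whose magnitude response |sin ν| is a standard result, against the ideal ramp |ν|:

```python
import numpy as np

nu = np.linspace(0.0, 4 * np.pi, 4001)       # one-sided frequency axis
d_nu = nu[1] - nu[0]
in_band = nu <= np.pi                        # Nyquist region (T = 1)

ideal = np.where(in_band, nu, 0.0)           # ideal gradient filter: |w_i| = nu
real = np.abs(np.sin(nu))                    # |DTFT| of the central difference

# Eq. (6): area of the response leaking past the Nyquist frequency.
P = real[~in_band].sum() * d_nu
# Eq. (7): area where the real response falls below the ideal (smoothing).
S = np.maximum(ideal - real, 0.0)[in_band].sum() * d_nu
# Eq. (8): area where the real response exceeds the ideal (overshoot).
O = np.maximum(real - ideal, 0.0)[in_band].sum() * d_nu
# Eq. (9): maximum second derivative of the response inside the Nyquist band.
Mmax = np.max(np.abs(np.gradient(np.gradient(real[in_band], d_nu), d_nu)))

print(P, S, O, Mmax)
```

Because |sin ν| never exceeds ν, the overshoot term vanishes for this filter, while the leakage of |sin ν| beyond the Nyquist frequency shows up entirely in P.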
4. FILTER SELECTION
We analyzed two families of filters: one that is well known in the literature, and another developed by Moller et al. [3] using a Taylor series expansion of Equation (1). The following filters were employed in our analysis:
1. Ideal Interpolation and Gradient Filters: According to the sampling theorem, a signal can be reconstructed exactly using an ideal interpolation filter if it is band-limited and sampled at the Nyquist rate or higher. The ideal interpolation filter is the Sinc function:

Sinc(t) = sin(πt/T) / (πt/T)    (10)
The gradient is defined as the derivative of the original function. Ideal interpolation with Sinc reconstructs the original function exactly; it follows that the gradient can be reconstructed exactly using the derivative of the Sinc function. Thus the ideal gradient reconstruction filter is the analytic derivative of the ideal interpolation filter. The ideal gradient filter is defined as [1]:

Cosc(t) = cos(πt/T)/t − sin(πt/T)/(πt²/T)    (11)
The frequency-domain representation of the ideal filters is shown in Figure 5.
The ideal filters are defined over an infinite spatial interval, and therefore cannot be used in practical applications. However, practical filters are compared against the ideal filters for evaluation.
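The two ideal kernels are straightforward to implement for comparison plots; a sketch with the sampling distance normalized to T = 1 (note that np.sinc computes the normalized sin(πt)/(πt)):

```python
import numpy as np

def sinc(t):
    """Ideal interpolation filter, Eq. (10), with T = 1."""
    return np.sinc(t)

def cosc(t):
    """Ideal gradient filter, Eq. (11): the analytic derivative of sinc (T = 1)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)            # cosc(0) = 0, since sinc is even
    nz = t != 0
    out[nz] = (np.cos(np.pi * t[nz]) / t[nz]
               - np.sin(np.pi * t[nz]) / (np.pi * t[nz] ** 2))
    return out

print(float(sinc(0.0)), float(cosc(np.array([0.5]))[0]))   # 1.0, -4/pi ≈ -1.273
```

The truncation of these infinitely supported kernels is exactly what the practical filters below are designed to avoid.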
2. Cubic Interpolation Filters: This is the two-parameter family of cubic filters, with parameters B and C, defined by the following set of equations [6]:

h(t) = (1/6) · { (12 − 9B − 6C)|t|³ + (−18 + 12B + 6C)|t|² + (6 − 2B),           |t| < 1
                 (−B − 6C)|t|³ + (6B + 30C)|t|² + (−12B − 48C)|t| + (8B + 24C),  1 ≤ |t| < 2
                 0,                                                               otherwise }    (12)

This family includes the well-known B-spline (B=1, C=0), the Catmull-Rom spline (B=0, C=0.5) and the Cardinal splines (B=0, with C as a free tension parameter).
3. Cubic Gradient Filters: This class of filters is obtained by differentiating the cubic interpolation filters above. They are defined by the following set of equations [3]:

h'(t) = (sgn(t)/6) · { 3(12 − 9B − 6C)t² + 2(−18 + 12B + 6C)|t|,        |t| < 1
                       3(−B − 6C)t² + 2(6B + 30C)|t| + (−12B − 48C),    1 ≤ |t| < 2
                       0,                                                otherwise }    (13)
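Both cubic families are simple piecewise polynomials; a sketch of the Mitchell–Netravali kernel and its derivative as written above, with spot checks of the Catmull-Rom interpolation property (kernel equal to 1 at the center and 0 at the knots):

```python
def bc_kernel(t, B, C):
    """Two-parameter cubic interpolation kernel, Eq. (12)."""
    a = abs(t)
    if a < 1:
        return ((12 - 9*B - 6*C) * a**3 + (-18 + 12*B + 6*C) * a**2
                + (6 - 2*B)) / 6
    if a < 2:
        return ((-B - 6*C) * a**3 + (6*B + 30*C) * a**2
                + (-12*B - 48*C) * a + (8*B + 24*C)) / 6
    return 0.0

def bc_gradient(t, B, C):
    """Cubic gradient kernel, Eq. (13); odd symmetry in t."""
    a = abs(t)
    s = 1.0 if t >= 0 else -1.0
    if a < 1:
        return s * (3 * (12 - 9*B - 6*C) * a**2
                    + 2 * (-18 + 12*B + 6*C) * a) / 6
    if a < 2:
        return s * (3 * (-B - 6*C) * a**2 + 2 * (6*B + 30*C) * a
                    + (-12*B - 48*C)) / 6
    return 0.0

# Catmull-Rom (B=0, C=0.5) interpolates the samples exactly.
print(bc_kernel(0.0, 0.0, 0.5), bc_kernel(1.0, 0.0, 0.5))   # 1.0 0.0
```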
4. Smooth Interpolation and Gradient Filters: Using a Taylor series expansion of the convolution sum in Equation (1), Moller et al. [3] proposed a new series of smooth interpolation and gradient filters. Their derivation is outlined below.
We assume that the original signal f(t) has at least N derivatives, i.e., it is a member of the class of smooth functions CN. This condition is generally met in practice. It allows us to expand fk = f(kT) in a Taylor series around t [3].
f_k = Σ_{n=0}^{N−1} (f^(n)(t) / n!) · (kT − t)^n + R_{N,k}    (15)

where R_{N,k} = (f^(N)(ξ_k) / N!) · (kT − t)^N is the Taylor remainder, for some ξ_k between t and kT.
Substituting this Taylor series expansion into Equation (1) and reordering the terms, we can rewrite Equation (1) as:

f_rw(t) = Σ_{n=0}^{N−1} a_n(τ) · f^(n)(t) + r_N(t)    (16)

where,

a_n(τ) = (T^n / n!) · Σ_k (k − τ)^n · w((τ − k)T)    (17)

r_N(t) = Σ_k R_{N,k} · w(t − kT)    (18)

i = ⌊t/T⌋    (19)

τ = t/T − i    (20)
The relation between the distance t, the offset τ, and the sampling distance T is shown in Figure 6.
To design an accurate filter for reconstructing the k-th derivative, we impose the following conditions on the coefficients a_n(τ), defined by Equation (17):
Condition 1:

a_k(τ) = 1, for all τ ∈ [0, 1)    (21)

Condition 2:

a_n(τ) = 0, for all τ ∈ [0, 1) and all n ≠ k, 0 ≤ n ≤ N−1    (22)

These two conditions define an N-EF class of filters (that is, the error term is of the N-th order) that reconstruct the k-th derivative. Applying these two conditions yields a piecewise filter kernel.
For an interpolation filter (k = 0), applying these conditions reduces the reconstruction function in Equation (16) to f_rw(t) = f(t) + r_N(t); thus the reconstruction recovers the original function up to the error term. Similarly, for the first-order gradient filter (k = 1) we get f_rw(t) = f'(t) + r_N(t).
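Conditions 1 and 2 can be checked numerically for a given discrete kernel. A minimal sketch, using the classical central-difference gradient filter purely as an illustration, with the coefficient formula a_n(τ) = (T^n/n!) · Σ_k (k − τ)^n · w((τ − k)T), T = 1, evaluated at the grid points (τ = 0):

```python
from math import factorial

def a_n(n, kernel, tau=0.0, support=range(-3, 4)):
    """Taylor coefficient a_n(tau) = (1/n!) * sum_k (k - tau)^n * w(tau - k), T = 1."""
    return sum((k - tau) ** n * kernel(tau - k) for k in support) / factorial(n)

def central_diff(s):
    """Central-difference gradient kernel: w(-1) = 1/2, w(1) = -1/2 (illustrative)."""
    if s == -1:
        return 0.5
    if s == 1:
        return -0.5
    return 0.0

coeffs = [a_n(n, central_diff) for n in range(4)]
print(coeffs)   # a_0 = a_2 = 0, a_1 = 1, a_3 = 1/6
```

Since a_1 = 1 and a_0 = a_2 = 0 while a_3 ≠ 0, the central difference satisfies Conditions 1 and 2 for k = 1 with the first nonzero error coefficient at n = 3, i.e., it is a 2-EF gradient filter at the grid points.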
The next condition arises from the requirement that the filter kernel w, and hence the reconstructed function f_rw, belong to the same smooth function space CM [5]. This is necessary to obtain a continuous reconstruction; otherwise the image may contain undesirable artifacts. This leads to Condition 3:
Condition 3:

w ∈ CM    (23)

To satisfy this condition we represent the filter w by basis functions of the CM space, as in Equation (24). An obvious choice of basis functions is polynomials, since polynomials are members of C∞:

w(t) = Σ_j c_{i,j} · t^j,  on each knot interval [i, i+1)    (24)

Once we have selected a basis, we apply Conditions 1 and 2 and solve for the polynomial coefficients. If these conditions lead to an underdetermined system, we treat the unsolved coefficients as design parameters.
By varying M and N and fixing k we get a family of filters that accurately reconstruct the k-th derivative, within an error factor of degree N.
The choice of the number of filter weights is somewhat arbitrary. If we choose too many, the resulting filter becomes computationally inefficient; if too few, the equation system might not lead to a solution at all.
5. EVALUATION
In order to render volume data we had to select an interpolation filter, a gradient filter and a gradient estimation methodology, Equation (2) to Equation (5). Since we were mainly interested in the gradient filters, we kept the interpolation filter and the gradient estimation methodology constant and varied the gradient filter.
We fixed our interpolation filter to Catmull-Rom, since this filter has been found to be the best practical reconstruction filter [6]. For the gradient estimation methodology we chose the method of analytical derivative, Equation (5), since this method is computationally the most efficient [4].
For the evaluation of the various gradient filters, we adopted the strategy followed by Marschner and Lobb [6]. The use of a real dataset may lead to erroneous evaluation results, since the dataset may contain noise. Hence, we used a synthetic dataset generated from the following test signal:
ρ(x, y, z) = (1 − sin(πz/2) + α(1 + ρ_r(√(x² + y²)))) / (2(1 + α)),  where ρ_r(r) = cos(2π f_M cos(πr/2))    (25)

The signal was sampled on a 40 x 40 x 40 lattice in the range [−1, 1]³, with fM = 6 and α = 0.25. It can be shown that a significant amount of the energy of this function lies near the Nyquist frequency, making the signal very demanding on reconstruction filters. Figure 7 shows an isosurface rendering of the test volume.
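The test signal itself is easy to generate; a sketch of the sampling step, with the [−1, 1]³ range and α = 0.25 following Marschner and Lobb's construction:

```python
import numpy as np

def ml_signal(x, y, z, fM=6.0, alpha=0.25):
    """Marschner-Lobb test signal, Eq. (25)."""
    rho_r = np.cos(2 * np.pi * fM * np.cos(np.pi * np.sqrt(x**2 + y**2) / 2))
    return (1 - np.sin(np.pi * z / 2) + alpha * (1 + rho_r)) / (2 * (1 + alpha))

# Sample on the 40 x 40 x 40 lattice over [-1, 1]^3 used in the paper.
axis = np.linspace(-1.0, 1.0, 40)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
volume = ml_signal(x, y, z)
print(volume.shape, float(volume.min()) >= 0.0, float(volume.max()) <= 1.0)
```

The normalization by 2(1 + α) keeps the signal in [0, 1], which is convenient for transfer-function design during rendering.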
Next, these gradient filters were analyzed using the proposed metrics, Equations (6) to (9). The results of the analysis were then compared with the images obtained previously.
6. RESULTS
In the experiments, for both the families of filters, we varied one parameter (B and C for cubic filters and continuity and error factor for smooth filters) while keeping the other fixed and evaluated the metrics. The results of the evaluation are given below:
6.1 Postaliasing
Cubic Filters: The filter B=0.4, C=0.5 was found to have the minimum postaliasing, while B=0, C=0 had the maximum value. The variation of this metric with B and C is shown in Figure 8(a). As we increase C and keep B constant at 0, the postaliasing decreases, attains its minimum at C=0.5 (Catmull-Rom) and then increases, Figure 8(a). The images in Figure 12 support this: the degree of scalloping of the crests decreases from C=0 to C=0.5 and then increases at C=1.0.
Smooth Filters: Postaliasing decreases with increase in continuity, though not monotonically. The variation of postaliasing with error factor follows a similar trend, as is shown in figure 8(b). The amount of postaliasing as measured by our metric indicated that for an error factor of 2, the filter with a continuity of 2 would be the best, followed by those with continuities of 3, 1 and 0, respectively. This observation is supported by the images in Figure 14.
Comparatively, the family of smooth filters performs far better on this metric. In fact, the lowest postaliasing value among the cubic filters is an order of magnitude higher than the postaliasing of nearly half the members of the smooth filter family.
6.2 Smoothing
Cubic Filters: For lower values of B (0.0 to 0.2), smoothing decreases with increase in C, while for higher values it initially decreases and then increases. A similar trend is observed for increasing B as shown in Figure 9(a). The images in Figures 12 and 13 confirm these results. As we increase C and keep B constant, the waves have sharper ripples, Figure 12.
Smooth Filters: Smoothing increases with continuity and with error factor. There is a sharp increase in going from a continuity of 0 to 1 and from an error factor of 2 to 3; beyond these the graph saturates, as shown in Figure 9(b). The images in Figures 14 and 15 confirm this measurement: the image for C (continuity) = 1 and EF (error factor) = 2 is more heavily smoothed than the image for C=0 and EF=2.
As a family, the smooth filters perform better: their smoothing metric ranges from 1.074 to 1.605, while it ranges from 1.420 to 1.994 for the cubic filters.
The images in Figures 12 through 15 support this measurement: the differences between images obtained using the various smooth filters are far smaller than those between images obtained using the various cubic filters.
6.3 Overshoot
Cubic Filters: For a given B, the overshoot always increases with increase in C, whereas for a given C it decreases with increase in B, until the tail end where it increases slightly. This is evident from Figure 10(a).
Smooth Filters: The overshoot generally decreases with continuity, though not always monotonically. For a given error factor, the values are the lowest for a continuity of 3. As shown in Figure 10(b), for a given continuity, the variation with error factor does not follow a regular trend. The filter with a continuity of 3 and zero error factor has the least overshoot.
The cubic filters perform better on this metric. As is evident from Figure 10(a) and Figure 10(b), the highest value of Overshoot recorded for cubic filters is less than the least value recorded for the smooth filters.
We expected overshoot to predict discontinuity in the images, but since the test signal was a continuous function, none of the images could be used to verify this metric.
6.4 Maximum Derivative
Cubic Filters: The filter B=0, C=0.5 has the least value of the maximum derivative metric M. For a given C, M always increases with B; for a given B, M first decreases with C and then increases, Figure 11(a). The images in Figures 12 and 13 support these measurements. For constant B, aliasing decreases as C increases, attains a minimum at C=0.5, and then increases again. In Figure 12, the patches on the surface vary in a similar manner.
Smooth Filters: Figure 11(b) gives the overall picture of the variation of M with both the error factor and the continuity. The best values are obtained for an error factor of 3. This is validated by the corresponding images in Figure 15: aliasing decreases as we increase the error factor and then becomes constant.
As a family, the cubic filters show a smaller maximum rate of variation of slope. However, only a small percentage of the cubic filters have very low values of M, while nearly half of the smooth filter family does.
7. CONCLUSIONS
According to the design criteria for smooth filters, a filter w of class CM can be decomposed into a polynomial of degree M and a remainder term. The polynomial part translates into a rapidly decaying function in frequency space, and the higher the smoothness condition, the quicker the decay. Thus the aliasing effects of the designed filter should diminish with increasing M (continuity). Our results confirmed this expectation.
In most cases we found that the results of our evaluation tallied with the image quality. For example, with B held constant at 0, it is evident from Figure 9(a), as well as the corresponding images in Figure 12, that smoothing decreases with increasing C.
Most importantly, our new metric was able to explain the difference in image quality for two filters which measured nearly equal on the smoothing metric.
We were not able to validate the results obtained for the overshoot metric, since our test volume did not contain sharp discontinuities.
8. ACKNOWLEDGMENTS
We are thankful to Drs. Klaus Mueller, Raghu Machiraju and Torsten Moller for helping us understand and implement the smooth filters. We are grateful to Dr. Joe Picone, who equipped us with the fundamentals of DSP. We would also like to thank Mr. Aravind Ganapthiraju for helping us with the integration of the code.
9. REFERENCES
[1] M. J. Bentum, T. Malzbender and B. B. Lichtenbelt, "Frequency Analysis of Gradient Estimators in Volume Rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 2, no. 3, pp. 242-254, September 1996.
[2] T. Moller, R. Machiraju, K. Mueller and R. Yagel, "Classification and Local Error Estimation of Interpolation and Derivative Filters for Volume Rendering," Proceedings of the Symposium on Volume Visualization, pp. 71-78, San Francisco, CA, USA, October 1996.
[3] T. Moller, R. Machiraju, K. Mueller and R. Yagel, "Evaluation and Design of Filters Using a Taylor Series Expansion," IEEE Transactions on Visualization and Computer Graphics, vol. ITVCG-3, no. 2, pp. 184-199, June 1997.
[4] T. Moller, R. Machiraju, K. Mueller and R. Yagel, "A Comparison of Normal Estimation Schemes," Proceedings of Visualization'97, pp. 19-26, Phoenix, AZ, USA, October 1997.
[5] T. Moller, R. Machiraju, K. Mueller and R. Yagel, "Design of Accurate and Smooth Filters for Function and Derivative Reconstruction," Proceedings of the Symposium on Volume Visualization, pp. 200-209, Chapel Hill, NC, USA, October 1998.
[6] S.R. Marschner and R.J. Lobb, "An Evaluation of Reconstruction Filters for Volume Rendering," Proceedings of Visualization'94, pp. 100-107, Washington D.C., USA, October 1994.
[7] R.A. Drebin, L. Carpenter and P. Hanrahan, "Volume Rendering," Computer Graphics, vol. 22, no. 4, pp. 51-58, August 1988.
Figure 1. Postaliasing
Figure 2. Smoothing.
Figure 3. Overshoot.
Figure 4. Maximum Derivative
Figure 5. Ideal Filters.
Figure 6. Filter of Window 4.
Figure 7. The Unsampled Test Signal.
Figure 8. Postaliasing: (a) Cubic Filters, (b) Smooth Filters.
Figure 9. Smoothing: (a) Cubic Filters, (b) Smooth Filters.
Figure 10. Overshoot: (a) Cubic Filters, (b) Smooth Filters.
Figure 11. Maximum Derivative: (a) Cubic Filters, (b) Smooth Filters.
Figure 12. Cubic Filters - with varying C.
Figure 13. Cubic Filters - with varying B.
Figure 14. Smooth Filters - with varying continuity.
Figure 15. Smooth Filters - with varying error factor.