A few months ago I started a fun project to develop a low-cost superficial vein finder as an Android smartphone accessory. I had already identified that I needed an Android-compatible camera without an IR filter, because the built-in camera of a typical smartphone cuts off near-IR radiation completely.
After several days of research, I bought a night-vision camera with V4L2 driver support from eBay. I was able to get the video stream from the camera into OpenCV through the NDK on my Android phone thanks to saki's code for accessing UVC cameras from Android. The next challenge was to find an image enhancement method to process the vein images in real time: it had to normalize the illumination and enhance the contrast of the superficial veins.
I implemented several illumination normalization and adaptive contrast enhancement algorithms from credible literature, but none of them gave the expected result.
After many hours, I got an idea for a new contrast enhancement algorithm based on the linear contrast stretching I had learned in my university image processing course. The idea is extremely simple: it only applies Gaussian smoothing and linear contrast stretching with a small modification. It took only 10 minutes to implement the new algorithm in C++ with OpenCV, but the result looked better. I named the new algorithm Speeded-Up Adaptive Contrast Enhancement (SUACE). Then I transferred the code to the Android vein finder project. I tested the application on my Galaxy Note5, and the new algorithm processed frames at a rate of 28 fps.
Since the algorithm was fresh and the results were better, I wrote a research paper and submitted it to the ICIIS2017 conference. Mr. H. Kulathilake from the Rajarata University also contributed to preparing this paper. I also tried applying the same algorithm to enhance the blood vessel structure of retinal fundus images and found the result far better than that of existing enhancement algorithms (see the comparison). This was confirmed by a medical doctor, Dr. M. P. Giragama. Since the paper submission deadline of ICIIS2017 was extended by 15 more days, I wrote another paper including the retinal fundus image enhancement and segmentation results, with a slight modification to Tyler Coye's algorithm, and submitted it to the same conference. Fortunately, both papers were accepted with good comments from the reviewers.
The Adaptive Contrast Enhancement Algorithm
In the conventional contrast stretching algorithm, a given range of intensities is mapped into a wider range by using a linear or piecewise linear transformation function. However, the given range (reference range) of the intensities is static for the whole set of pixels in the image. The idea of the novel algorithm is to change the reference range based on the illumination around the pixel.
It is a known fact that the illumination can be extracted from the lower-frequency components of an image. These lower-frequency components can be obtained by applying a smoothing filter to the image, without transforming it to the frequency domain.
In this new algorithm, the illumination is estimated by convolving the image with a Gaussian kernel. The reference intensity range of the contrast stretching algorithm is then shifted according to the corresponding value in the smoothed image. The following figure shows the height map (height proportional to intensity) of the original image I(x,y) and its Gaussian-smoothed version g(x,y).
Height map of the original image I(x,y) and the Gaussian smoothed image g(x,y)
The red marks in the above image indicate the reference range of intensities to be stretched. The midpoint of the range corresponds to the g(x,y) value, with a(x,y) and b(x,y) as the lower and upper boundaries. The reference range contains d intensity levels. The typical contrast stretching function is shown in the figure below.
In this algorithm, a and b are shifted along the reference intensity axis according to the value of g(x,y), keeping the width d of the reference range fixed. The following equations explain how a(x,y), b(x,y), and the contrast-enhanced image I'(x,y) are obtained.
Sample Images Enhanced by SUACE
Original Image
Enhanced with SUACE
Original Image
Enhanced with SUACE
The following image shows the performance comparison of different contrast enhancement algorithms applied to a retinal fundus image with their blood vessel segmentation outputs.
The above image shows how well the SUACE algorithm performs in terms of illumination normalization and vessel structure enhancement. I used a Hough line detector to reconstruct the parts of the vessel structure lost to the global thresholding in the conventional Tyler Coye algorithm.
More details can be found in the two research papers, which will be available in the IEEE Xplore digital library in the near future. You can download the SUACE implementation in C++/OpenCV from here.
References
If you find this work helpful, please cite the following articles.
@inproceedings{bandara_super-efficient_2017,
address = {Peradeniya, Sri Lanka},
title = {Super-Efficient Spatially Adaptive Contrast Enhancement Algorithm for Superficial Vein Imaging},
booktitle = {2017 IEEE 12th International Conference on Industrial and Information Systems ({ICIIS}) ({ICIIS}'2017)},
author = {Bandara, Randitha and Kulathilake, Hemantha and Giragama, Manjula},
month = dec,
year = {2017},
keywords = {Adaptive Contrast Enhancement, Illumination Normalization, Mobile Computing, Real-time Vein Imaging}
}
@inproceedings{bandara_retinal_2017,
address = {Peradeniya, Sri Lanka},
title = {A Retinal Image Enhancement Technique for Blood Vessel Segmentation Algorithm},
booktitle = {2017 IEEE 12th International Conference on Industrial and Information Systems ({ICIIS}) ({ICIIS}'2017)},
author = {Bandara, Randitha and Giragama, Manjula},
month = dec,
year = {2017},
keywords = {Adaptive Contrast Enhancement, Illumination Normalization, Mobile Computing, Real-time Vein Imaging}
}