Saturday, February 14, 2015

Decorrelation Stretching

Decorrelation stretch is used to enhance the color differences in images whose channels are highly correlated. It can reveal details that would otherwise be subtle or invisible to the human eye.

I first came across decorrelation stretch in a Matlab webinar and was curious to find out how it works. From the Matlab online help [1] and some other online resources [2, 3] I learned the theory and implemented it in C++ using OpenCV. A very nice application of decorrelation stretching is uncovering rock art; see the DStretch site [2].

The theory is well explained in [2, 3], so I won't go into details but provide a brief explanation.

To remove interchannel correlation, we project the input data into a space where the channels are uncorrelated; such a space can be found by performing PCA on the input data. The projected data is then scaled in this eigenspace so that each component has unit variance, projected back to the original space, and finally stretched to the desired variance. The process is illustrated in the figure below for a 2-D dataset.



The operation can be expressed as a pointwise linear transformation:

y = St Rᵀ Sc R (x - μ) + target_μ

x : n x 1 input data point.
y : n x 1 output data point.
μ : n x 1 mean of the input data.
R : n x n rotation matrix formed by the eigenvectors found from PCA (stored as rows). R(x - μ) projects the zero-mean input data into the new eigenspace.
Sc : n x n diagonal scaling matrix. The entries on the main diagonal are of the form 1/σ_i, where σ_i is the standard deviation of the corresponding variable in the new eigenspace. Multiplication by Sc gives the variables in the eigenspace unit variance.
Rᵀ : projects the data back to the original space (Rᵀ = R⁻¹, since a rotation matrix satisfies Rᵀ R = I). The reprojected data still has unit variance.
St : n x n diagonal stretching matrix. The entries on the main diagonal are of the form target_σ_i, where target_σ_i is the desired standard deviation of the corresponding variable in the input data space. Multiplication by St sets the variance of the variables in the input data space to the desired values.
target_μ : the back-projected data is still zero-mean; adding target_μ shifts the mean of each variable to the desired mean.
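
To make these steps concrete, here is a minimal, self-contained sketch (it uses made-up synthetic data and is separate from the dstretch code below) that builds R and Sc with cv::PCA for a correlated 2-D dataset and checks that the decorrelated variables end up nearly uncorrelated:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

int main()
{
   // synthetic 2-D dataset, one point per row, with correlated variables:
   // start from independent unit-variance samples and mix them linearly
   Mat ind(1000, 2, CV_32F);
   RNG rng;
   rng.fill(ind, RNG::NORMAL, Scalar(0), Scalar(1));
   Mat mix = (Mat_<float>(2, 2) << 10, 8,
                                    0, 2);
   Mat pts = ind * mix;                  // rows are now correlated 2-D points

   // R: rotation matrix formed by the eigenvectors found from PCA
   PCA pca(pts, Mat(), CV_PCA_DATA_AS_ROW);
   Mat R = pca.eigenvectors;             // 2 x 2

   // Sc: diagonal matrix holding 1/sigma of the data projected into the eigenspace
   Mat sigma;
   sqrt(pca.eigenvalues, sigma);
   Mat Sc = Mat::diag(1/sigma);

   // decorrelate: each point becomes Sc R (x - mu); since the points are stored
   // as rows this is written as (x' - mu') R' Sc
   Mat zm = pts - repeat(pca.mean, pts.rows, 1);
   Mat decorr = zm * R.t() * Sc;

   // the variables of decorr should now be uncorrelated: the off-diagonal
   // entries of its covariance matrix should be close to zero
   Mat cov, mu;
   calcCovarMatrix(decorr, cov, mu,
                   CV_COVAR_NORMAL | CV_COVAR_ROWS | CV_COVAR_SCALE, CV_32F);
   std::cout << "covariance after decorrelation:" << std::endl << cov << std::endl;

   return 0;
}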

The dstretch function below takes as input a multi-channel image and two optional column vectors specifying the desired mean and standard deviation for each channel of the output decorrelation stretched image.

 
#include <opencv2/opencv.hpp>

using namespace cv;

/*
input       : p x q x n multi-channel image.
targetMean  : n x 1 vector containing desired mean for each channel of the dstretched image. If empty, mean of the input data is used.
targetSigma : n x 1 vector containing desired sigma for each channel of the dstretched image. If empty, sigma of the input data is used.

returns floating point dstretched image
*/

Mat dstretch(const Mat& input, const Mat& targetMean = Mat(), const Mat& targetSigma = Mat())
{
   CV_Assert(input.channels() > 1);
 
   Mat dataMu, dataSigma, eigDataSigma, scale, stretch;

   Mat data = input.reshape(1, input.rows*input.cols);
   /*
   data stored as rows. 
   if x(i) = [xi1 xi2 .. xik]' is vector representing an input point, 
   data is now an N x k matrix:
   data = [x(1)' ; x(2)' ; .. ; x(N)']
   */

   // take the mean and standard deviation of input data
   meanStdDev(input, dataMu, dataSigma);

   /*
   perform PCA that gives us an eigenspace.
   eigenvectors matrix (R) lets us project input data into the new eigenspace.
   square root of eigenvalues gives us the standard deviation of projected data.
   */
   PCA pca(data, Mat(), CV_PCA_DATA_AS_ROW);

   /*
   prepare scaling (Sc) and stretching (St) matrices.
   we use the relation var(a.X) = a^2.var(X) for a random variable X and 
   set
   scaling factor a = 1/(sigma of X) for diagonal entries of scaling matrix.
   stretching factor a = desired_sigma for diagonal entries of stretching matrix.
   */

   // scaling matrix (Sc)
   sqrt(pca.eigenvalues, eigDataSigma);
   scale = Mat::diag(1/eigDataSigma);

   // stretching matrix (St)
   // if targetSigma is empty, set sigma of transformed data equal to that of original data
   if (targetSigma.empty())
   {
      stretch = Mat::diag(dataSigma);
   }
   else
   {
      CV_Assert((1 == targetSigma.cols) &&  (1 == targetSigma.channels()) && 
         (input.channels() == targetSigma.rows));

      stretch = Mat::diag(targetSigma);
   }
   // convert to 32F
   stretch.convertTo(stretch, CV_32F);
 
   // subtract the mean from input data
   Mat zmudata;
   Mat repMu = repeat(dataMu.t(), data.rows, 1);
   subtract(data, repMu, zmudata, Mat(), CV_32F);

   // if targetMean is empty, set mean of transformed data equal to that of original data
   if (!targetMean.empty())
   {
      CV_Assert((1 == targetMean.cols) && (1 == targetMean.channels()) && 
         (input.channels() == targetMean.rows));

      repMu = repeat(targetMean.t(), data.rows, 1);
   }

   /*
   project zero mean data to the eigenspace, normalize the variance and reproject,
   then stretch it so that it has the desired sigma: StR'ScR(x(i) - mu), R'R = I.
   since the x(i)s are organized as rows in data, take the transpose of the above
   expression: (x(i)' - mu')R'Sc'(R')'St' = (x(i)' - mu')R'ScRSt,
   then add the desired mean:
   (x(i)' - mu')R'ScRSt + mu_desired
   */
   Mat transformed = zmudata*(pca.eigenvectors.t()*scale*pca.eigenvectors*stretch);
   add(transformed, repMu, transformed, Mat(), CV_32F);

   // reshape transformed data
   Mat dstr32f = transformed.reshape(input.channels(), input.rows);

   return dstr32f;
}  

The demo program below shows how dstretch was used to highlight different regions of a Landsat image.

int main()
{
   // rgb image
   Mat bgr = imread("14_L.jpg");
   // target mean and sigma
   Mat mean = Mat::ones(3, 1, CV_32F) * 120;
   Mat sigma = Mat::ones(3, 1, CV_32F) * 50;
   // convert to CIE L*a*b* color space
   Mat lab;
   cvtColor(bgr, lab, CV_BGR2Lab);
   // dstretch Lab data. dstretch outputs a floating point matrix
   Mat dstrlab32f = dstretch(lab, mean, sigma);
   // convert to uchar
   Mat dstrlab8u;
   dstrlab32f.convertTo(dstrlab8u, CV_8UC3);
   // convert the stretched Lab image to rgb color space
   Mat dstrlab2bgr;
   cvtColor(dstrlab8u, dstrlab2bgr, CV_Lab2BGR);
   // dstretch RGB data
   Mat dstrbgr32f = dstretch(bgr, mean, sigma);
   // convert to uchar
   Mat dstrbgr8u;
   dstrbgr32f.convertTo(dstrbgr8u, CV_8UC3);

   imshow("RGB", bgr);
   imshow("dstretched CIE L*a*b* converted to RGB", dstrlab2bgr);
   imshow("dstretched RGB", dstrbgr8u);
   waitKey();

   return 0;
} 
Input image: Philadelphia, Pennsylvania, USA (image courtesy of the U.S. Geological Survey)
dstretched CIE L*a*b* image converted to RGB
dstretched RGB image
Data distribution of the images (plotted using the Matplotlib mplot3d toolkit)
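
To check the effect numerically rather than visually, one can compare the interchannel correlation of an image before and after stretching. Below is a minimal sketch (it assumes the bgr and dstrbgr8u matrices from the demo above are in scope; channelCorrelation is just an illustrative helper name). After the stretch, the off-diagonal entries should be close to zero.

// correlation matrix of an image's channels: entries near +/-1 indicate
// strongly correlated channels, entries near 0 indicate decorrelated ones
Mat channelCorrelation(const Mat& img)
{
   Mat data = img.reshape(1, img.rows*img.cols);   // N x channels
   Mat data32f;
   data.convertTo(data32f, CV_32F);

   // covariance matrix of the channels
   Mat cov, mu;
   calcCovarMatrix(data32f, cov, mu,
                   CV_COVAR_NORMAL | CV_COVAR_ROWS | CV_COVAR_SCALE, CV_32F);

   // normalize the covariance into a correlation matrix
   Mat d;
   sqrt(cov.diag(), d);            // per-channel standard deviations
   Mat corr;
   divide(cov, d*d.t(), corr);     // element-wise division
   return corr;
}

// usage, e.g. at the end of main() in the demo above:
//    std::cout << channelCorrelation(bgr) << std::endl;
//    std::cout << channelCorrelation(dstrbgr8u) << std::endl;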

References
[1] Matlab decorrstretch documentation, http://in.mathworks.com/help/images/ref/decorrstretch.html
[2] DStretch, http://www.dstretch.com/DecorrelationStretch.pdf
[3] A. R. Gillespie, A. B. Kahle, and R. E. Walker, “Color enhancement of highly correlated images. I. Decorrelation and HSI contrast stretches,” Remote Sensing of Environment, vol. 20, no. 3, pp. 209–235, Dec. 1986.
 


18 comments:

  1. Hi Danushka Dangampola, I have sent you a message, please check your inbox. Thanks in advance.

  2. I had to change the Mat type to be able to run the code.
     cv::Mat stretch1;
     stretch.copyTo(stretch1);
     stretch1.convertTo(stretch, CV_32F);

     Reply: Can you please specify for which parameter (input, targetMean or targetSigma) you had to change the datatype?

  3. I see that if targetSigma is of a type other than CV_32F, the code wouldn't work. Converting the stretch matrix to 32F fixes this, and I've made that change.

     Reply: Hi, can you send your changes to franco.amato@gmail.com? Thanks.

     Reply: Hi,
     Just create the sigma vector as CV_32F and pass it to the function as shown in the test code above (repeated below).

       Mat sigma = Mat::ones(3, 1, CV_32F) * 50;
       :
       Mat dstrlab32f = dstretch(lab, mean, sigma);

  4. Thank you Dhanushka Dangampola, it works great. It saved me from having to implement the algorithm myself; I used your code instead. Thank you once again.

  5. Hi Dhanushka,
     thanks a lot for sharing the documentation and the code!
     Is your code published under some license like BSD? That would be great!

     Reply: Hi,
     No explicit licensing. I'm sharing it for anyone to use. Please report if you find any errors. Thanks.

  6. Hi Dhanushka,

     I am trying to apply this algorithm in MATLAB with no clear result, so I must be doing something wrong: I convert the image from RGB to Lab, apply decorrstretch to the Lab image, and then convert back, but nothing noticeable happens. Is it possible to find a MATLAB version of your code somewhere?

     Thank you
     Serban

     Reply: I'm sorry, I don't have a Matlab version of it.
     You can check the distribution of the result you get from the Matlab decorrstretch function, either using a scatter plot or by calculating image statistics. That way you may be able to tune the input arguments of decorrstretch to get the desired result.

  7. Hi,

     Is there any Matlab code example available? I would like to test it.
  8. Hey, thanks a lot for the information!
     I have been using decorrstretch in Matlab with great results. However, I was wondering if there is a way to ignore pixels which lie outside the AOI and are black. I think these black pixels also affect the result; they are basically the NoData pixels in ArcMap which fill the empty regions of the bounding box.

     Reply: Prepare a mask that indicates the regions of the image that are of interest to you, then extract these pixel values from the image and feed them to the function. Since we are only looking at the channels, the spatial relationships are of no interest.
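
     A minimal sketch of that masking idea (it assumes the dstretch function above is available, that mask is a CV_8UC1 image which is non-zero inside the area of interest, and maskedDstretch is just an illustrative helper name):

     #include <vector>

     // gather only the masked pixels, dstretch them, and scatter the result back;
     // pixels outside the mask (e.g. NoData/black fill) are left untouched
     Mat maskedDstretch(const Mat& bgr, const Mat& mask, const Mat& mean, const Mat& sigma)
     {
        // collect the pixels of interest into an N x 1, 3-channel matrix
        std::vector<Vec3b> px;
        for (int r = 0; r < bgr.rows; ++r)
           for (int c = 0; c < bgr.cols; ++c)
              if (mask.at<uchar>(r, c))
                 px.push_back(bgr.at<Vec3b>(r, c));

        Mat roi(px, true);                               // N x 1, CV_8UC3
        Mat stretched8u;
        dstretch(roi, mean, sigma).convertTo(stretched8u, CV_8UC3);

        // write the stretched values back into a copy of the image
        Mat out = bgr.clone();
        int i = 0;
        for (int r = 0; r < bgr.rows; ++r)
           for (int c = 0; c < bgr.cols; ++c)
              if (mask.at<uchar>(r, c))
                 out.at<Vec3b>(r, c) = stretched8u.at<Vec3b>(i++, 0);
        return out;
     }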
  9. Please provide an email contact. I require clarification on smoothing binary images.