How to convert the prediction_R to a RGB image? #8
I have the same question, have you solved it?

no
We're trying to use this code and ran into the same issue. Does anyone have an answer yet? I know that prediction_S and prediction_R are two four-dimensional tensors, and the dimensions probably correspond to the 16 images in a batch, the width, the height, and the 3 color channels, but they seem to be transformed in some way I can't quite decipher yet. Apparently some of those transformations are undone in the JointLoss class (in models.networks.py).

We're currently studying this code and the related paper in my lab, so I'll probably make some progress soon and will post it here for you if I do. My guess is it shouldn't be that complicated, since they show those reflectance and shading images in the paper :)
Keep in mind also that, as they state at the start of section 5 of the paper, they work with the images in the logarithmic domain, so we may also need to undo that logarithm.
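Putting the guesses above together, here is a minimal sketch of how one might visualize prediction_R. It assumes the tensor has the (batch, channels, height, width) layout common in PyTorch, that the network output is in the log domain (so we exponentiate), and that rescaling to [0, 1] is acceptable for display; the function name `log_prediction_to_rgb` is hypothetical, not part of the repository:

```python
# Hypothetical helper, assuming prediction_R is a log-domain (N, C, H, W) torch tensor.
import numpy as np
import torch


def log_prediction_to_rgb(prediction_r: torch.Tensor, index: int = 0) -> np.ndarray:
    """Pick one image from the batch, undo the log, and rescale to uint8 RGB."""
    img = prediction_r[index].detach().cpu()   # (C, H, W), detached from the graph
    img = torch.exp(img)                       # undo the log domain (assumption)
    img = img.permute(1, 2, 0).numpy()         # (H, W, C) layout for image libraries
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # rescale to [0, 1]
    return (img * 255).astype(np.uint8)


# usage (PIL):
#   from PIL import Image
#   Image.fromarray(log_prediction_to_rgb(prediction_R)).save("reflectance.png")
```

Whether the exp and the min/max rescaling are exactly right depends on what normalization the network applies internally, so treat this as a starting point to compare against the figures in the paper.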
Did you solve this problem?
I want to display the reflectance image correctly. How should I process the prediction_R returned by

```
prediction_S, prediction_R, rgb_s = self.netG.forward(input_images)
```

? Thanks for your help!