Using existing blurred image as input #3

Closed
cyprian opened this issue Apr 14, 2022 · 2 comments
Comments

cyprian commented Apr 14, 2022

Thank you for your awesome work!

I would like to test the diffusion starting from an input image.
For example: I create a blurry image and then use a diffusion model to clean it.
Looking at your de-blurring code, it takes clean images from a folder and blurs them automatically before running the diffusion.
Is there a way to use my already blurred image as the input directly, or do I have to change the code to do that?

Thank you.

bahjat-kawar (Owner) commented
The current code requires the original image for PSNR evaluation. If you want to apply it on an already blurred image without evaluation, a few changes to the code would be necessary. You would also need to make sure the code uses the correct blurring kernel.
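Why the correct kernel matters can be seen in a standalone numpy sketch (not tied to this repo's code): if you deblur a pre-blurred measurement assuming the wrong kernel, the result is consistently off, even with no noise. The kernels and signal below are made up for illustration.

```python
import numpy as np

def blur_matrix(kernel, n):
    """Build an n x n circulant matrix that applies a 1-D convolution kernel."""
    H = np.zeros((n, n))
    k = len(kernel)
    for i in range(n):
        for j, w in enumerate(kernel):
            H[i, (i + j - k // 2) % n] = w
    return H

n = 32
x = np.sin(np.linspace(0, 4 * np.pi, n))        # toy "clean" signal
true_kernel = np.array([0.2, 0.6, 0.2])         # kernel that produced the blur
H = blur_matrix(true_kernel, n)
y = H @ x                                       # the pre-blurred input we already have

# Deblurring with the correct kernel recovers x (H is well conditioned here).
x_hat = np.linalg.pinv(H) @ y
print(np.max(np.abs(x_hat - x)) < 1e-8)         # True

# Deblurring with a wrong (but still normalized) kernel leaves a residual error.
H_wrong = blur_matrix(np.array([0.1, 0.8, 0.1]), n)
x_bad = np.linalg.pinv(H_wrong) @ y
print(np.max(np.abs(x_bad - x)) > 1e-3)         # True
```

The same logic applies in 2-D: the degradation operator the code assumes must match the one that actually produced your blurred image.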

malekinho8 commented Jun 23, 2022

Was anyone ever able to figure out how to implement this in the code? I have been looking through the code for hours, debugging line by line, and I'm still not sure whether it is possible. Any help or hints would be greatly appreciated.

Edit: To be specific, I'm currently trying to apply the code to a combination of linear inverse problems: inpainting and super-resolution. An image sample may have an empty region of arbitrary size that I want to inpaint, and it may also have known regions that are low-resolution. I can see how to code up the inpainting part, since the Inpainting class defined in svd_replacement.py can fill in gaps of any shape as long as the mask is known (which it is). Would it be possible to combine this with the SuperResolution class somehow? That is: inpaint where there are gaps, leave the high-resolution pixels alone, and restore the low-resolution pixels.

Apologies for the long-winded question, I am just trying to understand the potential capability of this code. Thanks so much for putting this out and letting us experiment with it.
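One way to see that this combination is possible in principle is that masking and downsampling are both linear, so their combination is still a single linear degradation operator whose SVD can be plugged into the same framework. The sketch below is a standalone 1-D numpy illustration with a made-up region layout, not the actual Inpainting/SuperResolution API from svd_replacement.py.

```python
import numpy as np

n = 12
x = np.arange(n, dtype=float)          # toy 1-D "image"

# Hypothetical layout: pixels 0-3 are known at full resolution,
# pixels 4-7 are observed only as 2x-downsampled averages,
# and pixels 8-11 are missing entirely (the inpainting region).
rows = []
for i in range(4):                     # high-res pixels: identity rows
    e = np.zeros(n)
    e[i] = 1.0
    rows.append(e)
for i in range(4, 8, 2):               # low-res region: 2-pixel averaging rows
    e = np.zeros(n)
    e[i] = e[i + 1] = 0.5
    rows.append(e)
H = np.stack(rows)                     # 6 x 12 combined degradation operator

y = H @ x                              # the measurement we would actually observe

# The SVD of H is what a DDRM-style operator class needs to expose.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
x_pinv = Vt.T @ ((U.T @ y) / s)        # least-norm estimate consistent with y

print(np.allclose(H @ x_pinv, y))      # True: measurements are reproduced
print(np.allclose(x_pinv[:4], x[:4]))  # True: high-res pixels come back exactly
print(np.allclose(x_pinv[8:], 0.0))    # True: null space left for the prior to fill
```

The pseudo-inverse alone leaves the missing region at zero; filling that null space with plausible content is precisely the diffusion model's job, which is why a single combined operator like this fits the same pattern as the existing classes.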
