
Conversation

@dcasbol dcasbol (Contributor) commented Jul 30, 2025

In the current version, drop_block2d only seems to work with odd block sizes, i.e. 3, 5, 7, 9, ... This is not covered in the documentation, so it took me a while to figure out. Fixing it requires applying asymmetric padding, which max_pool2d does not support, so explicit padding via F.pad is necessary in this case. Also, I reckon it might be much simpler to document this aspect and add a check in the code than to accept even block sizes.

This fix only applies to the 2d version and does not fix drop_block3d.
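For illustration only (a minimal standalone sketch, not the actual change in this PR): max_pool2d's padding argument pads both sides of each dimension by the same amount, whereas F.pad accepts a separate amount per side, which is the difference that matters here.

import torch
import torch.nn.functional as F

# Standalone illustration, not the PR diff: max_pool2d pads both sides of a
# dimension by the same amount, while F.pad can pad each side independently.
x = torch.zeros(1, 1, 7, 7)

sym = F.max_pool2d(x, kernel_size=2, stride=1, padding=1)  # 1 row/column of padding on every side
asym = F.pad(x, [0, 1, 0, 1], value=0)                     # pads only the right and bottom
asym = F.max_pool2d(asym, kernel_size=2, stride=1)

print(sym.shape)   # torch.Size([1, 1, 8, 8])
print(asym.shape)  # torch.Size([1, 1, 7, 7])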


pytorch-bot bot commented Jul 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/9157

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit beb4b6e with merge base b208f7f:

BROKEN TRUNK - The following job failed but was already present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.


meta-cla bot commented Jul 30, 2025

Hi @dcasbol!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!


meta-cla bot commented Jul 30, 2025

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@meta-cla meta-cla bot added the cla signed label Jul 30, 2025
@NicolasHug NicolasHug (Member) commented

Thanks for the PR @dcasbol! Can you help me understand what happens when passing an even block size at the moment? Is it a loud error, or a silent bug?

Also, I reckon that it might be much simpler to comment this aspect in the documentation and include a check in the code than accepting even block sizes

I agree it'd be simpler. Do you think you could edit the PR to include this?

@dcasbol dcasbol (Contributor, Author) commented Sep 3, 2025

Hi Nicolas! Thanks for taking a look at this. The bug is silent; I only noticed it further down the line because the activation maps were smaller than expected, which triggered some errors. For reference, the relevant lines in the current implementation are:

# The noise seeds live on a grid reduced by (block_size - 1) in each spatial dimension
noise = torch.empty((N, C, H - block_size + 1, W - block_size + 1), dtype=input.dtype, device=input.device)
noise.bernoulli_(gamma)

# Each seed is expanded into a block via padding + max pooling; both steps
# use block_size // 2, which only equals (block_size - 1) / 2 for odd sizes
noise = F.pad(noise, [block_size // 2] * 4, value=0)
noise = F.max_pool2d(noise, stride=(1, 1), kernel_size=(block_size, block_size), padding=block_size // 2)

The issue is basically that the code assumes an odd block size, for which the block_size // 2 operations give the expected values; with an even block size they introduce an extra position. I made a simple visualization of this:

[Figure: noise map padding and max_pool2d output for an odd-sized vs. an even-sized block]

You can see that, for the even-sized block, the padding adds one more position than actually needed, producing a 10x10 map after the (again wrongly padded) max_pool2d operation, instead of 8x8.
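The same arithmetic can be checked in isolation (a standalone sketch using the padding and pooling lines quoted above, not the library call itself):

import torch
import torch.nn.functional as F

def spread_shape(H, W, block_size):
    # Dummy noise map, padded and max-pooled exactly like in the snippet above
    noise = torch.zeros(1, 1, H - block_size + 1, W - block_size + 1)
    noise = F.pad(noise, [block_size // 2] * 4, value=0)
    noise = F.max_pool2d(noise, stride=(1, 1), kernel_size=(block_size, block_size), padding=block_size // 2)
    return tuple(noise.shape[-2:])

print(spread_shape(8, 8, 3))  # (8, 8): matches the 8x8 input for an odd block size
print(spread_shape(8, 8, 2))  # (10, 10): two rows/columns too many for an even block size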

And actually, now that I look closer at this, I realize that even-sized blocks feel kind of wrong, because they are off-centered by definition. I will edit the PR to include a check on the block size and raise an error if it is not odd.
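Something along these lines (a sketch only; the exact message and placement may differ in the final diff):

# Sketch of the guard discussed above, not necessarily the merged wording
if block_size % 2 == 0:
    raise ValueError(f"block_size should be an odd number, got block_size={block_size}.")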

@NicolasHug NicolasHug (Member) left a comment

Thank you @dcasbol!

@NicolasHug NicolasHug merged commit 094e7af into pytorch:main Sep 5, 2025
59 of 60 checks passed

github-actions bot commented Sep 5, 2025

Hey @NicolasHug!

You merged this PR, but no labels were added.
The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py
