[Bug] Fixing test_gpu_sampling_DataLoader that fails with torch 2.7. #7876

Open
wants to merge 4 commits into base: master

Conversation

drivanov
Contributor

Description

Beginning with PyTorch 2.7.0a, the test_gpu_sampling_DataLoader test consistently fails, producing the following error:

group = group or _get_default_group()
>       work = group.alltoall_base(
            output, input, output_split_sizes, input_split_sizes, opts
        )
E       RuntimeError: No backend type associated with device type cpu
E       This exception is thrown by __iter__ of MiniBatchTransformer(datapipe=Bufferer, transformer=_seeds_cooperative_exchange_1_wait_future)

/usr/local/lib/python3.12/dist-packages/torch/distributed/distributed_c10d.py:4390: RuntimeError

Since the gloo backend does not support alltoall:

        group = group or _get_default_group()
>       work = group.alltoall(output_tensor_list, input_tensor_list, opts)
E       RuntimeError: Backend gloo does not support alltoall
E       This exception is thrown by __iter__ of MiniBatchTransformer(datapipe=Bufferer, transformer=_seeds_cooperative_exchange_2)

and the ucc backend is slated for deprecation, the mpi backend is now the only viable option for this test.

This PR introduces backward compatibility, allowing the test to run on both older and newer PyTorch versions.
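The backward-compatibility logic described above can be sketched as a small version-gated backend picker. This is an illustrative sketch, not DGL's actual code: the helper names `parse_version` and `pick_backend` are ours, and the `2.7.0` threshold is taken from the failure report above (gloo lacks `alltoall`, ucc is slated for deprecation, so mpi is the remaining option on newer PyTorch).

```python
def parse_version(version):
    """Parse a version string like '2.7.0a0+git1234' into a comparable
    tuple of integers, ignoring local/pre-release suffixes."""
    numeric = []
    for part in version.split("+")[0].split("."):
        digits = ""
        for ch in part:
            if ch.isdigit():
                digits += ch
            else:
                break  # stop at the first non-digit, e.g. the 'a' in '0a0'
        numeric.append(int(digits) if digits else 0)
    return tuple(numeric)


def pick_backend(torch_version):
    """Return the torch.distributed backend to use for the GPU sampling test.

    On PyTorch < 2.7 the test could run with gloo; from 2.7.0a on, the
    alltoall collective it relies on requires the mpi backend, since gloo
    does not support alltoall and ucc is slated for deprecation.
    """
    if parse_version(torch_version) >= (2, 7, 0):
        return "mpi"
    return "gloo"


print(pick_backend("2.6.0"))    # older PyTorch keeps the previous backend
print(pick_backend("2.7.0a0"))  # 2.7.0a and later fall back to mpi
```

In the real test, the chosen backend name would be passed to `torch.distributed.init_process_group(backend=...)` before the sampling `DataLoader` is constructed.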

Checklist

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])
  • I've leveraged the tools to beautify the Python and C++ code.
  • The PR is complete and small; read the Google eng practice (a CL is equivalent to a PR) to learn more about small PRs. In DGL, we consider PRs with fewer than 200 lines of core code changes to be small (examples, tests, and documentation may be exempted).
  • All changes have test coverage
  • Code is well-documented
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes
