Why is the loss explicitly reduced in distributed training? #9167
Answered by jbwang1997
LukeCho-8810 asked this question in Q&A
Hi, thanks for your great work. I don't understand why the loss is explicitly reduced across GPUs during distributed training. Isn't this already handled by PyTorch DDP?

mmdetection/mmdet/models/detectors/base.py Line 214 in ca11860
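For reference, the reduction I mean looks roughly like this (a sketch from memory, not the exact source; the function name is mine):

```python
import torch
import torch.distributed as dist

def average_loss_across_ranks(loss_value: torch.Tensor) -> torch.Tensor:
    """Average a scalar loss over all ranks so every GPU logs the same value."""
    if dist.is_available() and dist.is_initialized():
        # Clone the detached data so logging never touches the autograd graph.
        loss_value = loss_value.data.clone()
        # Divide by the world size first, then sum over all ranks:
        # the result is the mean loss across GPUs.
        dist.all_reduce(loss_value.div_(dist.get_world_size()))
    return loss_value
```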
Answered by jbwang1997 on Jun 23, 2022
Replies: 1 comment
Sorry, I don't quite follow your question. MMDetection is built on PyTorch, and we directly use PyTorch DDP for distributed training.
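For completeness, plain PyTorch DDP usage looks roughly like this (a generic sketch, not MMDetection's exact code; MMDetection's own wrapper, MMDistributedDataParallel, is built on top of this class):

```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_model(model: torch.nn.Module, local_rank: int) -> DDP:
    """Wrap a model for multi-GPU training with stock PyTorch DDP."""
    torch.cuda.set_device(local_rank)
    model = model.cuda(local_rank)
    # DDP averages *gradients* across ranks during backward(); it does not
    # synchronize the scalar loss values themselves, which is why logging
    # code may still all_reduce the loss explicitly.
    return DDP(model, device_ids=[local_rank])
```

Note that DDP synchronizes gradients, not loss tensors, so an explicit all_reduce on a cloned loss value (as in the line linked above) only changes what gets logged, not the gradients used for optimization.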
Answer selected by ZwwWayne