* rewrite the installation guide
* move installation troubleshooting to the FAQ page
* fix typo
* fix the config file path in the demo code
* minor rephrases
* minor update of the doc format
* typo fix
* update faq link
* fix a typo
* update config to fix the error with anchors
docs/en/faq.md (+76 -6)
@@ -2,16 +2,86 @@
We list some common issues faced by many users and their corresponding solutions here. Feel free to enrich the list if you find any frequent issues and have ways to help others solve them. If the contents here do not cover your issue, please create an issue using the [provided templates](https://github.com/open-mmlab/mmdetection/blob/master/.github/ISSUE_TEMPLATE/error-report.md/) and make sure you fill in all required information in the template.
- ## MMCV Installation
+ ## Installation
- Compatibility issue between MMCV and MMDetection; "ConvWS is already registered in conv layer"; "AssertionError: MMCV==xxx is used but incompatible. Please install mmcv>=xxx, <=xxx."
- Please install the correct version of MMCV for the version of your MMDetection following the [installation instruction](https://mmdetection.readthedocs.io/en/latest/get_started.html#installation).
+ Compatible MMDetection and MMCV versions are shown below. Please choose the correct version of MMCV to avoid installation issues.
If you simply use `pip install albumentations>=0.3.2`, it will install `opencv-python-headless` simultaneously (even though you have already installed `opencv-python`).
Please refer to the [official documentation](https://albumentations.ai/docs/getting_started/installation/#note-on-opencv-dependencies) for details.
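If both OpenCV wheels end up installed, a common resolution is to keep only one of them. A minimal sketch, assuming you want to keep the regular `opencv-python` build (on a headless server you may prefer the opposite):

```shell
# List the OpenCV wheels currently installed
pip list | grep opencv

# If both opencv-python and opencv-python-headless are present, drop the
# headless variant and reinstall opencv-python to make sure its files are intact
pip uninstall -y opencv-python-headless
pip install opencv-python
```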
- ModuleNotFoundError is raised when using some algorithms
Some extra dependencies are required for Instaboost, Panoptic Segmentation, the LVIS dataset, etc. Please note the error message and install the corresponding packages, as sketched below.
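The exact command depends on the package named in the error message; as a rough sketch, the kinds of installs involved look like the following (package sources taken from each upstream project, so double-check them against the error message and the MMDetection installation docs):

```shell
# InstaBoost augmentation
pip install instaboostfast
# COCO panoptic segmentation API
pip install git+https://github.com/cocodataset/panopticapi.git
# LVIS dataset API
pip install git+https://github.com/lvis-dataset/lvis-api.git
```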
- Do I need to reinstall mmdet after some code modifications
If you follow the best practice and install mmdet with `pip install -e .`, any local modifications made to the code will take effect without reinstallation.
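For reference, a minimal sketch of that editable install, assuming the repository has been cloned into a folder named `mmdetection`:

```shell
# Editable ("develop") install: Python imports mmdet directly from this source
# tree, so code edits take effect on the next run without reinstalling
cd mmdetection
pip install -v -e .
```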
- How to develop with multiple MMDetection versions
You can have multiple folders like mmdet-2.21, mmdet-2.22.
When you run the train or test script, it will adopt the mmdet package in the current folder.
To use the default MMDetection installed in the environment rather than the one you are working with, you can remove the following line in those scripts:
```shell
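# Prepend the repo root to PYTHONPATH so the mmdet package in this folder is imported first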
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH
```
## PyTorch/CUDA Environment
@@ -82,7 +152,7 @@
- "RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one"
1. This error indicates that your module has parameters that were not used in producing loss. This phenomenon may be caused by running different branches in your code in DDP mode.
2. You can set `find_unused_parameters = True` in the config to solve the above problem (but this will slow down the training speed).
3. If your MMCV version is >= 1.4.1, you can get the names of those unused parameters by setting `detect_anomalous_params=True` in the `optimizer_config` of the config; see the sketch after this list.
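For reference, a minimal config sketch combining both options (the `grad_clip=None` entry is only a placeholder for whatever your `optimizer_config` already contains):

```python
# Let DDP tolerate parameters that receive no gradient in some iterations
# (this slows training down a bit)
find_unused_parameters = True

# With MMCV >= 1.4.1, additionally report which parameters went unused
optimizer_config = dict(grad_clip=None, detect_anomalous_params=True)
```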
- Save the best model
@@ -91,7 +161,7 @@
- Resume training with `ExpMomentumEMAHook`
If you use `ExpMomentumEMAHook` in training, you can't just use the command-line parameter `--resume-from` or `--cfg-options resume_from` to restore model parameters during resume, i.e., the command `python tools/train.py configs/yolox/yolox_s_8x8_300e_coco.py --resume-from ./work_dir/yolox_s_8x8_300e_coco/epoch_x.pth` will not work. Since `ExpMomentumEMAHook` needs to reload the weights, taking the `yolox_s` algorithm as an example, you should modify the value of `resume_from` in two places of the config as below:
```python
# Open configs/yolox/yolox_s_8x8_300e_coco.py directly and modify all resume_from fields
```

@@ -130,6 +200,6 @@
ResNeXt comes from the paper [`Aggregated Residual Transformations for Deep Neural Networks`](https://arxiv.org/abs/1611.05431). It introduces group convolutions and uses “cardinality” to control the number of groups to achieve a balance between accuracy and complexity. It controls the basic width and grouping parameters of the internal Bottleneck module through the two hyperparameters `baseWidth` and `cardinality`. An example configuration name in MMDetection is `mask_rcnn_x101_64x4d_fpn_mstrain-poly_3x_coco.py`, where `mask_rcnn` indicates that the algorithm is Mask R-CNN, `x101` indicates that the backbone is ResNeXt-101, and `64x4d` indicates that the bottleneck block has 64 groups, each with a base width of 4.
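To illustrate how `64x4d` maps onto the config, a sketch of the backbone section only (field names follow the standard ResNeXt backbone settings; the rest of the model config is omitted):

```python
model = dict(
    backbone=dict(
        type='ResNeXt',
        depth=101,      # ResNeXt-101
        groups=64,      # "64x..." : cardinality, i.e. 64 groups per bottleneck
        base_width=4))  # "...4d"  : base width of 4 channels per group
```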
- `norm_eval` in backbone
Since detection models are usually large and the input image resolution is high, the batch size of a detection model is small, which makes the variance of the statistics calculated by BatchNorm during training very large and not as stable as the statistics obtained during pre-training of the backbone network. Therefore, the `norm_eval=True` mode is generally used in training, which directly uses the BatchNorm statistics of the pre-trained backbone network. The few algorithms that use large batch sizes run in `norm_eval=False` mode, such as NAS-FPN. For a backbone network without ImageNet pre-training and with a relatively small batch size, you can consider using `SyncBN`.
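For reference, a minimal sketch of where this flag lives in a typical backbone config (other fields omitted; replacing the `norm_cfg` type with `SyncBN` is the alternative mentioned above):

```python
model = dict(
    backbone=dict(
        type='ResNet',
        depth=50,
        norm_cfg=dict(type='BN', requires_grad=True),
        # Keep BatchNorm in eval mode during training: the running statistics
        # from the ImageNet pre-trained backbone are used instead of the
        # unstable small-batch statistics
        norm_eval=True))
```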
0 commit comments