
[Docs] Common macOS installation and runtime problems and solutions #2665

@N1ghtHeron

Description


Disclaimer

  1. This article is based on a Mac mini (M4) and records the problems and solutions encountered while installing and running the RVC one-click package on macOS. For reference only.
  2. The solutions here are fairly blunt. Make a backup before applying them.
  3. Written on 2025-08-20. If dependencies are updated later, defer to the actual situation.

Table of contents

  1. Installation problems
  2. Problems running infer-web.py
  3. Problems training models
  4. Problems with feature extraction / training the feature index / inference <- the PyTorch 2.6 and MPS issues are concentrated here

1. Installation problems

1.1 python3.8 has been removed from brew

Symptom:

sh ./run.sh fails with python3.8: command not found, because brew has disabled python@3.8.

Create venv... 
Python 3 not found. Attempting to install 3.8... 
==> Fetching downloads for: python@3.8 Error: python@3.8 has been disabled because it is deprecated upstream! It was disabled on 2024-10-14. 
==> No outdated dependents to upgrade! 
./run.sh: line 33: python3.8: command not found 
./run.sh: line 34: .venv/bin/activate: No such file or directory

Solution:

Using sh ./run.sh is not recommended here.
Instead, create a venv yourself with Python 3.10, then run

pip install -r requirements.txt

How to create the virtual environment:

# create a virtual environment named RVC with Python 3.10
conda create -n RVC python=3.10
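If you prefer not to use conda, a plain venv also works. A minimal sketch, assuming Homebrew's python@3.10 is installed and the commands are run from the RVC directory:

# hypothetical alternative: venv based on Homebrew's Python 3.10
brew install python@3.10
python3.10 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt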

1.2 aria2 cannot be installed

Symptom:

aria2 only ships Linux/Windows wheels; there is none for macOS (neither Intel nor ARM).

RuntimeError: Can't find tag for build.
Please set 'WHL_PLATFORM' env var or run this building on supported platform.
supported platform: manylinux_2_17_x86_64, musllinux_1_1_x86_64, manylinux_2_17_aarch64, musllinux_1_1_aarch64, win_amd64, win32

[end of output] 

note: This error originates from a subprocess, and is likely not a problem with pip. 
error: metadata-generation-failed 

× Encountered error while generating package metadata. 
╰─> See above for output. 

note: This is an issue with the package mentioned above, not pip.
hint: See above for details. 
working dir is /Users/usrname/RVC
downloading requirement aria2 check. 
failed. please install aria2

Solution:

Install it with brew instead.

brew install aria2
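You can then confirm that the binary is on PATH:

aria2c --version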

1.3 fairseq fails to install

Symptom:

fairseq/version.txt cannot be found.

Please use pip<24.1 if you need to use this version. 
INFO: pip is looking at multiple versions of hydra-core to determine which version is compatible with other requirements. This could take a while. 
Collecting fairseq 
Using cached fairseq-0.12.1.tar.gz (9.6 MB) 
Installing build dependencies ... done 
Getting requirements to build wheel ... error 
error: subprocess-exited-with-error 
× Getting requirements to build wheel did not run successfully. 
│ exit code: 1 
╰─> [16 lines of output] Traceback (most recent call last):

(omitted)

File "/private/var/folders/06/h34_0c317v5g8jdknfvr19580000gn/T/pip-build-env-jumnxxwa/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 317, in run_setup 
exec(code, locals()) 
File "<string>", line 27, in <module> 
File "<string>", line 18, in write_version_py 
FileNotFoundError: [Errno 2] No such file or directory: 'fairseq/version.txt' 
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip. 
error: subprocess-exited-with-error 

× Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. 

note: This error originates from a subprocess, and is likely not a problem with pip.

Or the new pip 24.x resolver rules conflict with fairseq's old dependency pins.

Please use pip<24.1 if you need to use this version. 
INFO: pip is looking at multiple versions of hydra-core to determine which version is compatible with other requirements. This could take a while. 
ERROR: Cannot install fairseq and fairseq==0.12.2 because these package versions have conflicting dependencies. 

The conflict is caused by: fairseq 0.12.2 depends on omegaconf<2.1 hydra-core 1.0.7 depends on omegaconf<2.1 and >=2.0.5 

To fix this you could try to: 
1. loosen the range of package versions you've specified 
2. remove package versions to allow pip to attempt to solve the dependency conflict

Solution:

First downgrade pip and pin the dependencies, then install fairseq (installing straight from the official repo bypasses the PyPI packaging problem).

pip install pip==23.3.1
pip install omegaconf==2.0.6 hydra-core==1.0.7 PyYAML==5.4.1
pip install git+https://github.com/facebookresearch/fairseq.git

Then test whether the installation succeeded. If a version number is printed, the install worked and you can continue.

python -c "import fairseq; print(fairseq.__version__)"

If an OpenMP runtime conflict occurs at this point, you can just check fairseq's version instead.

pip show fairseq

2. Problems running infer-web.py

2.1 ffmpeg still not found after installing it

Symptom:

After conda install ffmpeg, Python still cannot find ffmpeg, because import ffmpeg needs the Python package ffmpeg-python.

line 2, in <module> 
import ffmpeg 
ModuleNotFoundError: No module named 'ffmpeg'

Solution:

Just install it.

pip install ffmpeg-python
# after installing, run python infer-web.py again
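As a quick sanity check (a sketch): the ffmpeg command-line binary and the ffmpeg-python package are separate things, and both should now resolve.

which ffmpeg
python -c "import ffmpeg; print(ffmpeg.__file__)"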

2.2 faiss not found

Symptom:

faiss has no ready-made wheel for macOS ARM, so a plain pip install fails.

line 11, in <module> 
import faiss 
ModuleNotFoundError: No module named 'faiss'

Solution:

Install the CPU build (macOS has no CUDA anyway).

pip install faiss-cpu
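To confirm the module imports (a sketch; the version attribute is assumed to be present in faiss-cpu builds):

python -c "import faiss; print(faiss.__version__)"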

2.3 ModuleNotFoundError for packages other than ffmpeg and faiss

Symptom:

Did you run python infer-web.py right after fixing the errors from pip install -r requirements.txt?

ModuleNotFoundError: No module named '(package name)'

Solution:

Run it one more time:

pip install -r requirements.txt

2.4 matplotlib 3.7 / numpy issues

Symptom:

If you pip install matplotlib==3.7.0 on its own, pip does not keep numpy pinned while resolving dependencies and upgrades it to numpy 2.x along the way.

A module that was compiled using NumPy 1.x cannot be run in NumPy 2.2.6 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'. 

If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2.

Solution:

Just downgrade numpy.

pip install numpy==1.26.4 --force-reinstall
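Afterwards, confirm that numpy is back on 1.x:

python -c "import numpy; print(numpy.__version__)"    # should print 1.26.4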

3. Problems training models

3.1 When deleting a model, remember to delete its logs

I did not hit any problems in the training stage itself.
If you do, it may be enough to remember to delete the corresponding logs whenever you delete a model, as sketched below.
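A minimal sketch, assuming the default layout where each experiment writes its training logs under logs/<experiment_name> and the exported weights land in assets/weights:

# hypothetical example for an experiment named "model-test"
rm -rf logs/model-test
rm -f assets/weights/model-test.pth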

4. Problems with feature extraction / training the feature index / inference

Macs run into a lot of problems here, almost all of them caused by PyTorch 2.6 and MPS.

Approach:

  • Do not downgrade PyTorch; instead, manually patch the model-loading arguments wherever the project loads checkpoints (see issue #2466).
  • Disable MPS at startup and force single-threaded, CPU-only execution.

Below are my complete fix steps first, followed by explanations of the individual problems.

4.1 Unified fix steps

STEP 0. Back up the files you are about to modify.
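For example, a straightforward copy of the three files touched in STEP 1-3 (a sketch; adjust the paths to your checkout):

cp infer/modules/train/extract_feature_print.py infer/modules/train/extract_feature_print.py.bak
cp infer/modules/vc/utils.py infer/modules/vc/utils.py.bak
cp infer/modules/vc/modules.py infer/modules/vc/modules.py.bak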

STEP 1. Add the following code in infer/modules/train/extract_feature_print.py

Thanks to @Pr0gCat in issue #2575 for providing this fix.

# HuBERT model
printt("load model(s) from {}".format(model_path))

# if hubert model does not exist
if os.access(model_path, os.F_OK) == False:
    printt(
        "Error: Extracting is shut down because %s does not exist, you may download it from https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main"
        % model_path
    )
    exit(0)

# ------ADD HERE-------
torch.serialization.add_safe_globals([fairseq.data.dictionary.Dictionary])
# ------ADD HERE-------

models, saved_cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
    [model_path],
    suffix="",
)

STEP 2. Modify infer/modules/vc/utils.py; the full file is as follows

import os

import torch
from fairseq import checkpoint_utils
from fairseq.data.dictionary import Dictionary

def get_index_path_from_model(sid):
    return next(
        (
            f
            for f in [
                os.path.join(root, name)
                for root, _, files in os.walk(os.getenv("index_root"), topdown=False)
                for name in files
                if name.endswith(".index") and "trained" not in name
            ]
            if sid.split(".")[0] in f
        ),
        "",
    )

def load_hubert(config):
    with torch.serialization.safe_globals([Dictionary]):
        models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
            ["assets/hubert/hubert_base.pt"],
            suffix="",
            strict=False
        )
    hubert_model = models[0]
    hubert_model.to(config.device)
    hubert_model = hubert_model.half() if config.is_half else hubert_model.float()
    return hubert_model.eval()

STEP 3. Modify infer/modules/vc/modules.py; the full file is as follows

import os
import traceback
import logging
from io import BytesIO

import numpy as np
import soundfile as sf
import torch
from fairseq.data.dictionary import Dictionary

from infer.lib.audio import load_audio, wav2
from infer.lib.infer_pack.models import (
    SynthesizerTrnMs256NSFsid,
    SynthesizerTrnMs256NSFsid_nono,
    SynthesizerTrnMs768NSFsid,
    SynthesizerTrnMs768NSFsid_nono,
)
from infer.modules.vc.pipeline import Pipeline
from infer.modules.vc.utils import *

logger = logging.getLogger(__name__)


def load_hubert(config):
    """安全加载 Hubert 模型,兼容 PyTorch 2.6+"""
    from fairseq import checkpoint_utils

    hubert_path = "assets/hubert/hubert_base.pt"
    logger.info(f"Loading Hubert model from {hubert_path}")
    try:
        with torch.serialization.safe_globals([Dictionary]):
            models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
                [hubert_path], suffix="", strict=False
            )
        hubert_model = models[0]
        hubert_model.to(config.device)
        hubert_model = hubert_model.half() if config.is_half else hubert_model.float()
        return hubert_model.eval()
    except Exception as e:
        logger.error(f"Failed to load Hubert model: {e}")
        raise RuntimeError("Hubert model loading failed") from e


class VC:
    def __init__(self, config):
        self.n_spk = None
        self.tgt_sr = None
        self.net_g = None
        self.pipeline = None
        self.cpt = None
        self.if_f0 = None
        self.version = None
        self.hubert_model = None
        self.config = config

    def get_vc(self, sid, *to_return_protect):
        logger.info(f"Get sid: {sid}")
        to_return_protect0 = {
            "visible": self.if_f0 != 0,
            "value": (to_return_protect[0] if self.if_f0 != 0 and to_return_protect else 0.5),
            "__type__": "update",
        }
        to_return_protect1 = {
            "visible": self.if_f0 != 0,
            "value": (to_return_protect[1] if self.if_f0 != 0 and to_return_protect else 0.33),
            "__type__": "update",
        }

        if sid == "" or sid == []:
            if self.hubert_model is not None:
                logger.info("Clean model cache")
                del self.net_g, self.n_spk, self.hubert_model, self.tgt_sr
                self.hubert_model = self.net_g = self.n_spk = self.tgt_sr = None
                if torch.cuda.is_available():
                    torch.cuda.empty_cache()
            return (
                {"visible": False, "__type__": "update"},
                {"visible": True, "value": to_return_protect0, "__type__": "update"},
                {"visible": True, "value": to_return_protect1, "__type__": "update"},
                "",
                "",
            )

        person = f'{os.getenv("weight_root")}/{sid}'
        logger.info(f"Loading: {person}")

        self.cpt = torch.load(person, map_location="cpu")
        self.tgt_sr = self.cpt["config"][-1]
        self.cpt["config"][-3] = self.cpt["weight"]["emb_g.weight"].shape[0]  # n_spk
        self.if_f0 = self.cpt.get("f0", 1)
        self.version = self.cpt.get("version", "v1")

        synthesizer_class = {
            ("v1", 1): SynthesizerTrnMs256NSFsid,
            ("v1", 0): SynthesizerTrnMs256NSFsid_nono,
            ("v2", 1): SynthesizerTrnMs768NSFsid,
            ("v2", 0): SynthesizerTrnMs768NSFsid_nono,
        }
        self.net_g = synthesizer_class.get((self.version, self.if_f0), SynthesizerTrnMs256NSFsid)(
            *self.cpt["config"], is_half=self.config.is_half
        )
        del self.net_g.enc_q
        self.net_g.load_state_dict(self.cpt["weight"], strict=False)
        self.net_g.eval().to(self.config.device)
        self.net_g = self.net_g.half() if self.config.is_half else self.net_g.float()

        # initialize the pipeline
        self.pipeline = Pipeline(self.tgt_sr, self.config)

        n_spk = self.cpt["config"][-3]
        index = {"value": get_index_path_from_model(sid), "__type__": "update"}
        logger.info(f"Select index: {index['value']}")

        return (
            (
                {"visible": True, "maximum": n_spk, "__type__": "update"},
                to_return_protect0,
                to_return_protect1,
                index,
                index,
            )
            if to_return_protect
            else {"visible": True, "maximum": n_spk, "__type__": "update"}
        )

    def vc_single(
        self,
        sid,
        input_audio_path,
        f0_up_key,
        f0_file,
        f0_method,
        file_index,
        file_index2,
        index_rate,
        filter_radius,
        resample_sr,
        rms_mix_rate,
        protect,
    ):
        if input_audio_path is None:
            return "You need to upload an audio", None
        try:
            audio = load_audio(input_audio_path, 16000)
            audio_max = np.abs(audio).max() / 0.95
            if audio_max > 1:
                audio /= audio_max
            times = [0, 0, 0]

            # safely load Hubert
            if self.hubert_model is None:
                self.hubert_model = load_hubert(self.config)
            if self.pipeline is None:
                self.pipeline = Pipeline(self.tgt_sr, self.config)

            if file_index:
                file_index = file_index.strip().replace('"', '').replace("\n", "").replace("trained", "added")
            elif file_index2:
                file_index = file_index2
            else:
                file_index = ""

            audio_opt = self.pipeline.pipeline(
                self.hubert_model,
                self.net_g,
                sid,
                audio,
                input_audio_path,
                times,
                int(f0_up_key),
                f0_method,
                file_index,
                index_rate,
                self.if_f0,
                filter_radius,
                self.tgt_sr,
                resample_sr,
                rms_mix_rate,
                self.version,
                protect,
                f0_file,
            )

            tgt_sr = resample_sr if self.tgt_sr != resample_sr >= 16000 else self.tgt_sr
            index_info = "Index:\n%s." % file_index if os.path.exists(file_index) else "Index not used."
            return "Success.\n%s\nTime:\nnpy: %.2fs, f0: %.2fs, infer: %.2fs." % (index_info, *times), (tgt_sr, audio_opt)

        except Exception:
            info = traceback.format_exc()
            logger.warning(info)
            return info, (None, None)

    def vc_multi(
        self,
        sid,
        dir_path,
        opt_root,
        paths,
        f0_up_key,
        f0_method,
        file_index,
        file_index2,
        index_rate,
        filter_radius,
        resample_sr,
        rms_mix_rate,
        protect,
        format1,
    ):
        try:
            dir_path = dir_path.strip().replace('"', '').replace("\n", "")
            opt_root = opt_root.strip().replace('"', '').replace("\n", "")
            os.makedirs(opt_root, exist_ok=True)

            if dir_path:
                paths = [os.path.join(dir_path, name) for name in os.listdir(dir_path)]
            else:
                paths = [path.name for path in paths]

            infos = []
            for path in paths:
                info, opt = self.vc_single(
                    sid,
                    path,
                    f0_up_key,
                    None,
                    f0_method,
                    file_index,
                    file_index2,
                    index_rate,
                    filter_radius,
                    resample_sr,
                    rms_mix_rate,
                    protect,
                )
                if "Success" in info:
                    try:
                        tgt_sr, audio_opt = opt
                        if format1 in ["wav", "flac"]:
                            sf.write(f"{opt_root}/{os.path.basename(path)}.{format1}", audio_opt, tgt_sr)
                        else:
                            path_out = f"{opt_root}/{os.path.basename(path)}.{format1}"
                            with BytesIO() as wavf:
                                sf.write(wavf, audio_opt, tgt_sr, format="wav")
                                wavf.seek(0)
                                with open(path_out, "wb") as outf:
                                    wav2(wavf, outf, format1)
                    except Exception:
                        info += traceback.format_exc()
                infos.append(f"{os.path.basename(path)}->{info}")
                yield "\n".join(infos)
            yield "\n".join(infos)

        except Exception:
            yield traceback.format_exc()

STEP 4. Start the webui with the following commands

Disable CUDA, disable the matplotlib GUI backend, and force single-threaded CPU execution,
to avoid the segmentation faults and resource_tracker warnings caused by multi-process/multi-thread conflicts in a macOS + CPU environment.

export CUDA_VISIBLE_DEVICES=""    # run everything on CPU, disable MPS/GPU
export PYTORCH_ENABLE_MPS_FALLBACK=1    # let PyTorch fall back to CPU automatically when MPS is used; this line does not actually take effect here, but just in case
export MPLBACKEND=Agg    # force matplotlib to use the Agg backend
export OMP_NUM_THREADS=1    # limit OpenMP to a single thread
export MKL_NUM_THREADS=1    # limit Intel MKL to a single thread
python infer-web.py

4.2 Problem list

4.2.1 HuBERT feature extraction hangs

Symptom:

  • The webui hangs:
infer/modules/train/extract_feature_print.py mps 1 0 /Users/usrname/RVC/logs/model-test v2 False exp_dir: /Users/usrname/RVC/logs/model-test 
load model(s) from assets/hubert/hubert_base.pt
  • In the console:
    (this traceback comes from issue #2466):
Traceback (most recent call last):
  File "D:\RVC\infer\modules\train\extract_feature_print.py", line 89, in <module>
    models, saved_cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
  File "D:\RVC\.venv\lib\site-packages\fairseq\checkpoint_utils.py", line 425, in load_model_ensemble_and_task
    state = load_checkpoint_to_cpu(filename, arg_overrides)
  File "D:\RVC\.venv\lib\site-packages\fairseq\checkpoint_utils.py", line 315, in load_checkpoint_to_cpu
    state = torch.load(f, map_location=torch.device("cpu"))
  File "D:\RVC\.venv\lib\site-packages\torch\serialization.py", line 1470, in load
    raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
        (1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
        (2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
        WeightsUnpickler error: Unsupported global: GLOBAL fairseq.data.dictionary.Dictionary was not an allowed global by default. Please use `torch.serialization.add_safe_globals([Dictionary])` or the `torch.serialization.safe_globals([Dictionary])` context manager to allowlist this global if you trust this class/function.

In short, any error that contains this:

_pickle.UnpicklingError: Weights only load failed.
Unsupported global: GLOBAL fairseq.data.dictionary.Dictionary
Please use torch.serialization.add_safe_globals([fairseq.data.dictionary.Dictionary])

Or:

AttributeError: 'NoneType' object has no attribute 'tobytes'

The same code path can even lead to this:

TypeError: load_model_ensemble_and_task() got an unexpected keyword argument 'checkpoint_loader'

PyTorch 2.6+ is incompatible with the way fairseq loads checkpoints, so the hubert model fails to load, inference returns None, and Gradio then errors out when it calls .tobytes() on that None while handling the audio data.

Solution: STEP 1, 2

4.2.2 self.pipeline is None

Symptom:

The pipeline was never initialized.

File "/Users/usrname/RVC/infer/modules/vc/modules.py", line 188, in vc_single 
audio_opt = self.pipeline.pipeline(
AttributeError: 'NoneType' object has no attribute 'pipeline'

Solution: STEP 3

4.2.3 segmentation fault

Symptom:

The MPS backend is unstable on macOS, and the MPS + multiprocessing combination easily misfires or crashes. Errors include, but are not limited to:

[1] 18324 segmentation fault
[1] 19898 segmentation fault
[1] 20536 segmentation fault

Solution: STEP 4

4.2.4 Errors after selecting rmvpe

Symptom:

When rmvpe is selected for training/inference, errors like the following appear:

INFO:infer.modules.vc.pipeline:Loading rmvpe model,assets/rmvpe/rmvpe.pt
DEBUG:httpcore.http11:receive_response_headers.failed exception=ReadTimeout(TimeoutError())

Solution: run a pass with another pitch-extraction method first, then switch back to rmvpe; it may simply work after that.

I am not sure why this happens.

5. Summary

On macOS, RVC's problems are concentrated in dependency compatibility, an overly new PyTorch version, and MPS/CPU multiprocessing conflicts.
The final configuration used to run RVC in this article is: Python 3.10 + PyTorch 2.6 + MPS disabled at startup + single-threaded CPU execution.

6. Acknowledgements

Thanks to everyone who shared solutions in the issues 🙏
Thanks to ChatGPT 😭
