Problem: After installation, the `locallab` command isn't recognized

```
'locallab' is not recognized as an internal or external command
```
Solution:

- Check if Python's Scripts directory is in PATH:

  ```
  where python
  ```

- Add Python Scripts to PATH:
  - Find your Python install location (e.g., `C:\Users\YOU\AppData\Local\Programs\Python\Python311\`); the snippet after this list can locate it for you
  - Add the `Scripts` folder to PATH: `C:\Users\YOU\AppData\Local\Programs\Python\Python311\Scripts\`
  - Restart your terminal
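If you're unsure where that folder lives, Python can report it directly. A minimal standard-library sketch:

```python
import sysconfig

# Prints the directory where pip places console scripts such as `locallab`;
# this is the folder that needs to be on PATH.
print(sysconfig.get_path("scripts"))
```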
Alternative: Use the module directly:

```
python -m locallab start
```
Problem: Installation fails with errors about a missing compiler or CMake

```
error: Microsoft Visual C++ 14.0 or greater is required
```

or

```
'nmake' is not recognized
CMAKE_C_COMPILER not set
```
Solution:

- Install Build Tools:
  - Download Microsoft C++ Build Tools
  - During installation, select "Desktop development with C++"
- Install CMake:
  - Download from cmake.org
  - Add it to PATH during installation
- Restart your terminal and try the installation again (the check after this list confirms the tools are visible):

  ```
  pip install locallab --no-cache-dir
  ```
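A quick way to confirm the tools are discoverable from your shell, using only the standard library (note that `cl.exe`, the MSVC compiler, is often only on PATH inside a Developer Command Prompt):

```python
import shutil

# Each call prints an absolute path if the tool is on PATH, or None if not.
print(shutil.which("cmake"))
print(shutil.which("cl"))
```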
Problem: `pip install` fails

```
ERROR: Could not find a version that satisfies the requirement locallab
```
Solution:

- Make sure you have Python 3.8 or higher (see the check after this list):

  ```
  python --version
  ```

- Update pip:

  ```
  python -m pip install --upgrade pip
  ```

- Try installing with a specific Python version:

  ```
  python3.8 -m pip install locallab locallab-client
  ```
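The same version requirement can be checked from inside Python, which is useful when several interpreters are installed:

```python
import sys

# LocalLab requires Python 3.8+; fail fast if this interpreter is older.
assert sys.version_info >= (3, 8), f"Python 3.8+ required, found {sys.version}"
```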
Problem: Import error with locallab_client

```
ImportError: No module named locallab_client
```
Solution:

Remember that while you install with a hyphen (`pip install locallab-client`), you import with an underscore:

```python
# Correct imports:
from locallab_client import LocalLabClient      # For async
from locallab_client import SyncLocalLabClient  # For sync
```
Problem: Cannot connect to server

```
ConnectionError: Failed to connect to http://localhost:8000
```
Solution:

- Make sure the server is running:

  ```
  locallab start
  ```

- Wait for the "Server is running" message
- Check that the URL is correct
- Try the explicit localhost address (a polling sketch follows this list):

  ```python
  client = SyncLocalLabClient("http://127.0.0.1:8000")
  ```
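If the client starts before the server finishes loading, connecting too early produces the same error. A standard-library sketch that waits for the port to come up, assuming the server answers plain HTTP requests on its root URL:

```python
import time
import urllib.error
import urllib.request

from locallab_client import SyncLocalLabClient

def wait_for_server(url="http://127.0.0.1:8000", timeout=60):
    """Poll until the server accepts HTTP connections or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            urllib.request.urlopen(url, timeout=2)
            return True
        except urllib.error.HTTPError:
            return True  # The server answered, even if with an error status
        except urllib.error.URLError:
            time.sleep(1)  # Not accepting connections yet; retry
    return False

if wait_for_server():
    client = SyncLocalLabClient("http://127.0.0.1:8000")
```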
Issue: Insufficient VRAM

- Make sure your machine meets the minimum VRAM requirements for the selected model (a quick check follows this list).
- Consider switching to a model with a lower VRAM footprint if necessary.
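You can read the total VRAM per device with PyTorch. A sketch, assuming a CUDA build of PyTorch is installed:

```python
import torch

# Report total VRAM per CUDA device for comparison against the model's requirements.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"{props.name}: {props.total_memory / 1e9:.1f} GB VRAM")
else:
    print("No CUDA device detected")
```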
Issue: Errors during model download or loading

- Verify internet connectivity.
- Check that the model name in the registry is correct.
- Ensure sufficient system memory is available (see the check after this list).
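A quick RAM check using the third-party psutil package (`pip install psutil`), shown as a sketch:

```python
import psutil

# Compare the available figure against the model's memory requirements.
mem = psutil.virtual_memory()
print(f"Available RAM: {mem.available / 1e9:.1f} GB of {mem.total / 1e9:.1f} GB")
```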
Issue: Ngrok authentication error

```
ERROR: Failed to create ngrok tunnel: authentication failed
```

Solution:

- Get a valid ngrok auth token from the ngrok dashboard
- Set the token correctly:

  ```python
  import os
  os.environ["NGROK_AUTH_TOKEN"] = "your_token_here"
  ```
Issue: Out of Memory (OOM) errors

```
RuntimeError: CUDA out of memory
```

Solution:

- Enable memory optimizations:

  ```python
  import os

  os.environ["LOCALLAB_ENABLE_QUANTIZATION"] = "true"
  os.environ["LOCALLAB_QUANTIZATION_TYPE"] = "int8"
  ```

- Use a smaller model:

  ```python
  os.environ["HUGGINGFACE_MODEL"] = "microsoft/phi-2"  # Smaller model
  ```
Issue: HuggingFace Authentication Error

```
ERROR: Failed to load model: Invalid credentials in Authorization header
```

Solution:

- Get a HuggingFace token from the HuggingFace tokens page
- Set the token in one of these ways:

  ```python
  # Option 1: Environment variable
  import os
  os.environ["HUGGINGFACE_TOKEN"] = "your_token_here"

  # Option 2: Configuration file
  from locallab.cli.config import set_config_value
  set_config_value("huggingface_token", "your_token_here")
  ```

- Restart the LocalLab server

Note: Some models, such as microsoft/phi-2, require authentication to download.
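To verify a token before restarting the server, the huggingface_hub package (a dependency of most transformers installs) can identify the account it belongs to. A minimal sketch:

```python
from huggingface_hub import whoami

# Raises an error for an invalid token; otherwise returns account details.
print(whoami(token="your_token_here")["name"])
```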