Troubleshooting Guide

🔍 Common Issues for Beginners

Windows-Specific Issues

'locallab' Command Not Found

Problem: After installation, the locallab command isn't recognized

'locallab' is not recognized as an internal or external command

Solution:

  1. Locate your Python installation (the Scripts directory lives alongside it):
    where python
  2. Add Python Scripts to PATH:
    • Find your Python install location (e.g., C:\Users\YOU\AppData\Local\Programs\Python\Python311\)
    • Add Scripts folder to PATH: C:\Users\YOU\AppData\Local\Programs\Python\Python311\Scripts\
    • Restart your terminal

Alternative: Use the module directly:

python -m locallab start
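If you are unsure whether the Scripts directory is actually on PATH, this small stdlib-only check (not part of LocalLab) prints the directory and whether any PATH entry matches it:

```python
import os
import sysconfig

def scripts_dir_on_path() -> bool:
    """Return True if Python's Scripts/bin directory appears on PATH."""
    scripts = sysconfig.get_path("scripts")  # e.g. ...\Python311\Scripts on Windows
    entries = os.environ.get("PATH", "").split(os.pathsep)
    return any(
        os.path.normcase(os.path.normpath(e)) == os.path.normcase(os.path.normpath(scripts))
        for e in entries if e
    )

if __name__ == "__main__":
    print("Scripts dir:", sysconfig.get_path("scripts"))
    print("On PATH:", scripts_dir_on_path())
```

If it prints False, add the printed directory to PATH and restart the terminal.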

Compiler/Build Tools Missing

Problem: Installation fails with errors about missing compiler or CMake

error: Microsoft Visual C++ 14.0 or greater is required

or

'nmake' is not recognized
CMAKE_C_COMPILER not set

Solution:

  1. Install the Microsoft C++ Build Tools: download Visual Studio Build Tools and select the "Desktop development with C++" workload
  2. Install CMake:
    • Download from cmake.org
    • Add to PATH during installation
  3. Restart your terminal and try installation again:
    pip install locallab --no-cache-dir

Installation Issues

Problem: pip install fails

ERROR: Could not find a version that satisfies the requirement locallab

Solution:

  1. Make sure you have Python 3.8 or higher:
    python --version
  2. Update pip:
    python -m pip install --upgrade pip
  3. Try installing with specific Python version:
    python3.8 -m pip install locallab locallab-client
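The version requirement in step 1 can also be checked from Python itself; this is a generic stdlib sketch (the 3.8 minimum is the one stated above), useful inside scripts or notebooks:

```python
import sys

MIN_VERSION = (3, 8)  # minimum documented above

def meets_minimum(version=sys.version_info, minimum=MIN_VERSION) -> bool:
    """Compare an interpreter version tuple against the required minimum."""
    return tuple(version[:2]) >= minimum

if __name__ == "__main__":
    if not meets_minimum():
        raise SystemExit(
            f"Python {MIN_VERSION[0]}.{MIN_VERSION[1]}+ required, "
            f"found {sys.version_info.major}.{sys.version_info.minor}"
        )
    print("Python version OK")
```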

Import Issues

Problem: Import error with locallab_client

ImportError: No module named locallab_client

Solution: The package installs with a hyphen (pip install locallab-client) but imports with an underscore:

# Correct imports:
from locallab_client import LocalLabClient  # For async
from locallab_client import SyncLocalLabClient  # For sync

Connection Issues

Problem: Cannot connect to server

ConnectionError: Failed to connect to http://localhost:8000

Solution:

  1. Make sure the server is running:
    locallab start
  2. Wait for the "Server is running" message
  3. Check if the URL is correct
  4. Try with explicit localhost:
    client = SyncLocalLabClient("http://127.0.0.1:8000")
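Before debugging the client, it can help to confirm something is actually listening on the server port. This is a plain TCP probe using only the standard library, not a LocalLab API:

```python
import socket

def is_server_up(host: str = "127.0.0.1", port: int = 8000, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("LocalLab reachable:", is_server_up())
```

If this prints False, the problem is the server (not started, still loading, or on a different port) rather than the client.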

Common Issues

Model Loading and Inference

Issue: Insufficient VRAM

  • Make sure your machine meets the minimum VRAM requirements for the selected model.
  • Consider switching to a model with a lower VRAM footprint if necessary.

Issue: Errors during model download or loading

  • Verify internet connectivity.
  • Check that the model name in the registry is correct.
  • Ensure sufficient system memory is available.
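A cheap pre-flight check for the download step is verifying free disk space before fetching model weights. This stdlib sketch is illustrative only; the 10 GB figure is an example, not a LocalLab requirement:

```python
import shutil

def enough_disk_for_model(path: str = ".", needed_gb: float = 10.0) -> bool:
    """Rough pre-flight check: is there enough free disk space for a model download?"""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= needed_gb

if __name__ == "__main__":
    print("Free space OK:", enough_disk_for_model(needed_gb=10.0))
```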

Google Colab Specific Issues

Issue: Ngrok authentication error

ERROR: Failed to create ngrok tunnel: authentication failed

Solution:

  1. Get a valid ngrok auth token from the ngrok dashboard
  2. Set the token correctly:
import os
os.environ["NGROK_AUTH_TOKEN"] = "your_token_here"

Issue: Out of Memory (OOM) errors

RuntimeError: CUDA out of memory

Solution:

  1. Enable memory optimizations:
os.environ["LOCALLAB_ENABLE_QUANTIZATION"] = "true"
os.environ["LOCALLAB_QUANTIZATION_TYPE"] = "int8"
  2. Use a smaller model:
os.environ["HUGGINGFACE_MODEL"] = "microsoft/phi-2"  # Smaller model
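In a Colab notebook these settings can be grouped into a single setup cell run before starting the server; the values below are the ones shown in this guide:

```python
import os

# Memory optimizations from the steps above
os.environ["LOCALLAB_ENABLE_QUANTIZATION"] = "true"
os.environ["LOCALLAB_QUANTIZATION_TYPE"] = "int8"

# Optionally pick a smaller model
os.environ["HUGGINGFACE_MODEL"] = "microsoft/phi-2"

# Confirm what will be picked up by the server
print({k: v for k, v in os.environ.items() if k.startswith("LOCALLAB_")})
```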

Authentication Issues

Issue: HuggingFace Authentication Error

ERROR: Failed to load model: Invalid credentials in Authorization header

Solution:

  1. Get a HuggingFace token from the HuggingFace tokens page
  2. Set the token in one of these ways:
    # Option 1: Environment variable
    os.environ["HUGGINGFACE_TOKEN"] = "your_token_here"
    
    # Option 2: Configuration file
    from locallab.cli.config import set_config_value
    set_config_value("huggingface_token", "your_token_here")
  3. Restart the LocalLab server

Note: Some models like microsoft/phi-2 require authentication to download.
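To confirm the environment-variable option actually reached the process, a small generic check can be run before restarting the server (this only covers the env-var path; the config-file path goes through LocalLab's own set_config_value):

```python
import os

def hf_token_present() -> bool:
    """Check whether HUGGINGFACE_TOKEN is set and non-empty."""
    token = os.environ.get("HUGGINGFACE_TOKEN", "")
    return bool(token.strip())

if __name__ == "__main__":
    print("HF token set:", hf_token_present())
```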

Related Documentation