Path planning algorithms, combined with upper-limb instance segmentation, are used to generate obstacle-free trajectories to the printing locations, mitigating interference from the hand and forearm.
A monocular camera provides the input for a grid-based representation of the environment, over which trajectories for the robot to follow are planned.
- Python 3.x
- OpenCV
- NumPy
- Requests
- Supervision (sv)
- Inference models (from `inference.models.utils`)
- Clone the repository:

  ```bash
  git clone https://github.com/AnujithM/Eye-Gaze-Controlled-Robot.github.io.git
  cd Eye-Gaze-Controlled-Robot.github.io/Planner
  ```

- Install the required Python packages:

  ```bash
  pip install opencv-python-headless numpy requests supervision
  ```
- Replace `model_id` and `api_key` in the script with your specific model ID and API key (see the configuration sketch after this list).
- Update the `url` variable with the correct MJPEG stream URL.
- Run the script:

  ```bash
  python DynamicAstar_V12.py
  ```
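Only the names `model_id`, `api_key`, and `url` come from the steps above; everything else in the sketch below is an assumption about how the top of `DynamicAstar_V12.py` might be configured, including the use of the Roboflow `inference` package's `get_roboflow_model` helper to load the segmentation model.

```python
# Hypothetical configuration sketch; only model_id, api_key, and url are named
# in the setup steps. The loader call assumes the Roboflow "inference" package.
from inference.models.utils import get_roboflow_model

model_id = "your-hand-segmentation-model/1"  # replace with your model ID
api_key = "YOUR_API_KEY"                     # replace with your API key
url = "http://<camera-ip>:8080/stream.mjpg"  # replace with your MJPEG stream URL

model = get_roboflow_model(model_id=model_id, api_key=api_key)
```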
- Fetch and Decode Frames: The script continuously fetches frames from the specified MJPEG stream URL and decodes them using OpenCV (illustrative sketches of these steps follow this list).
- Hand Segmentation: The frames are passed through a hand segmentation model to detect and segment hands in each frame.
- Grid and Adjacency List: A grid overlay is created on the frame, dividing it into cells. An adjacency list is maintained to represent the connections between the grid cells.
- Dynamic A* Pathfinding: The dynamic A* algorithm finds a path from a source cell to a goal cell, avoiding cells containing hands (red centroids).
- Frame Annotation: The segmented frame is annotated with the detected hands, grid cells, and the path found by the A* algorithm.
- Real-time Display: The annotated frame is displayed in real time, showing the segmented hands, the grid, and the planned path.
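As an illustration of the first two steps (frame fetching and hand segmentation), the sketch below pulls JPEG frames out of an MJPEG byte stream with `requests` and OpenCV, then converts the model output into `supervision` detections. The stream parsing and the exact inference/conversion calls are assumptions, not code taken from `DynamicAstar_V12.py`.

```python
import cv2
import numpy as np
import requests
import supervision as sv


def mjpeg_frames(url):
    """Yield decoded BGR frames from an MJPEG stream (assumed JPEG-delimited)."""
    stream = requests.get(url, stream=True)
    buffer = b""
    for chunk in stream.iter_content(chunk_size=4096):
        buffer += chunk
        start = buffer.find(b"\xff\xd8")  # JPEG start-of-image marker
        end = buffer.find(b"\xff\xd9")    # JPEG end-of-image marker
        if start != -1 and end != -1 and end > start:
            jpg = buffer[start:end + 2]
            buffer = buffer[end + 2:]
            frame = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
            if frame is not None:
                yield frame


def segment_hands(model, frame):
    """Run the segmentation model and return supervision Detections (assumed API)."""
    result = model.infer(frame)[0]               # assumed inference call shape
    return sv.Detections.from_inference(result)  # assumed converter name
```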
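The grid, adjacency list, and dynamic A* steps can be sketched in plain Python. The cell size, 4-connected neighbourhood, Manhattan heuristic, and unit step costs below are illustrative choices and may differ from what the script actually uses.

```python
import heapq


def manhattan(a, b):
    """Manhattan distance between two (row, col) cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])


def build_grid(frame_shape, cell=40):
    """Divide the frame into cell x cell squares and build a 4-connected adjacency list."""
    rows, cols = frame_shape[0] // cell, frame_shape[1] // cell
    adjacency = {}
    for r in range(rows):
        for c in range(cols):
            adjacency[(r, c)] = [
                (r + dr, c + dc)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols
            ]
    return adjacency


def astar(adjacency, start, goal, blocked):
    """A* search over grid cells, skipping cells occupied by hands (the blocked set)."""
    frontier = [(manhattan(start, goal), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited or node in blocked:
            continue
        visited.add(node)
        for neighbour in adjacency[node]:
            if neighbour not in visited and neighbour not in blocked:
                heapq.heappush(
                    frontier,
                    (cost + 1 + manhattan(neighbour, goal), cost + 1, neighbour, path + [neighbour]),
                )
    return None  # no obstacle-free path exists
```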
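Finally, a sketch of the annotation and real-time display loop, reusing `mjpeg_frames`, `segment_hands`, `build_grid`, `astar`, `model`, and `url` from the sketches above. The start and goal cells, the mapping from hand centroids to blocked cells, and the way the path is drawn are assumptions for illustration only.

```python
import cv2
import supervision as sv

CELL = 40                            # must match the cell size used in build_grid
mask_annotator = sv.MaskAnnotator()  # draws the segmentation masks

for frame in mjpeg_frames(url):
    detections = segment_hands(model, frame)
    annotated = mask_annotator.annotate(scene=frame.copy(), detections=detections)

    # Hypothetical: block every grid cell that contains a hand centroid.
    centers = detections.get_anchors_coordinates(sv.Position.CENTER)
    blocked = {(int(y) // CELL, int(x) // CELL) for x, y in centers}

    adjacency = build_grid(frame.shape)
    last_row = max(r for r, _ in adjacency)
    path = astar(adjacency, start=(0, 0), goal=(last_row, 0), blocked=blocked)

    # Draw the planned path as dots at cell centres (illustrative only).
    if path:
        for r, c in path:
            cv2.circle(annotated, (c * CELL + CELL // 2, r * CELL + CELL // 2), 5, (0, 255, 0), -1)

    cv2.imshow("Planner", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
```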
Parts of this project page were adapted from the Nerfies project page.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.