👋 Hi, I’m Syed Muhammad Hussain

Badges: Machine Learning Engineer · Researcher · AI | Computer Vision · Robotics Enthusiast

I am currently working as a Machine Learning Engineer at Beam AI and a Research Intern at the Empathic Computing Laboratory, University of South Australia. I specialize in Artificial Intelligence, Computer Vision, and Robotics. My expertise lies in developing innovative, data-driven solutions to address complex challenges, with a focus on scalability and efficiency.

👀 I’m Interested In

  • AI & Machine Learning: Building intelligent systems using advanced machine learning techniques.
  • Computer Vision: Developing vision-based solutions for automation and real-time systems.
  • Robotics: Exploring SLAM, path planning, and control systems.
  • Open Source Collaboration: Contributing to meaningful open-source projects.


🌱 I’m Currently Learning

  • Advanced Machine Learning with Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG).
  • Deep Learning techniques for signal processing and classification.
  • Enhancing object detection and segmentation models using YOLOv8.
  • Multimodal approaches combining AI and Robotics for real-world applications.

💞️ I’m Looking to Collaborate On

  • AI Research Projects: Let’s explore innovative solutions using state-of-the-art models.
  • Robotics Systems: Collaborate on building autonomous robotic systems.
  • Open Source Initiatives: Developing tools and solutions for the community.
  • Tech Education Platforms: Sharing knowledge and mentoring aspiring developers.


📫 How to Reach Me

LinkedIn · Email · GitHub · Google Scholar

⚡ Fun Fact

I’m inspired by Boston Dynamics’ robotic systems and want to contribute to robotics that automates repetitive tasks and benefits humanity. When not coding, I enjoy mentoring students and working on impactful research projects.


Pinned Repositories

  1. Neural-Network-Approach

    Forked from google-research/tuning_playbook

    A playbook for systematically maximizing the performance of deep learning models.

  2. Camera_Inferencing_YOLOv8_Object_Detection (Python)

    A Python script that uses YOLOv8 from Ultralytics for real-time object detection with OpenCV. The script initializes a camera, loads the YOLOv8 model, and processes frames from the camera, annotating… A minimal inference sketch appears after this list.

  3. Implement-ViT-from-Scratch (Python, template)

    A from-scratch PyTorch implementation of the Vision Transformer (ViT) from the research paper titled "AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE". A patch-embedding sketch appears after this list.

  4. Computer-Vision-Research-Project (Jupyter Notebook)

    Surveillance video analysis applies computer-vision algorithms and techniques to extract useful information from surveillance footage. Computer…

  5. Microbial-cell-segmentation (Python)

    🔍 A real-time microbial cell detection and segmentation application built on YOLOv8, a state-of-the-art deep learning model. It provides an intuitive web interface for … A segmentation sketch appears after this list.

  6. Ros2-Slam-RPlidar

    This guide walks you through the installation and execution of SLAM using the RPLidar A2/A3 on ROS2, leveraging the rf2o_laser_odometry and turtlebot4 packages for odometry and visualization. A launch-file sketch appears after this list.
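
A minimal sketch of the webcam loop described in item 2, assuming only the Ultralytics and OpenCV packages; the yolov8n.pt weights and camera index 0 are illustrative defaults, not details taken from the repository:

```python
# Hypothetical sketch: real-time YOLOv8 detection on webcam frames with OpenCV.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # pretrained detection weights (assumed default)
cap = cv2.VideoCapture(0)         # open the default camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)  # run detection on the BGR frame
    annotated = results[0].plot()          # draw boxes, labels, and confidences
    cv2.imshow("YOLOv8 detection", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```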
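
For item 3, one piece every from-scratch ViT needs is the patch-embedding step from the paper; this sketch uses the paper's ViT-Base defaults (224×224 images, 16×16 patches, 768-dim embeddings) purely for illustration and is not the repository's code:

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into 16x16 patches, project each patch to an embedding,
    prepend a [CLS] token, and add learned position embeddings."""

    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution performs patch extraction + linear projection in one step.
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, embed_dim))

    def forward(self, x):                                 # x: (B, 3, 224, 224)
        x = self.proj(x).flatten(2).transpose(1, 2)       # (B, 196, 768)
        cls = self.cls_token.expand(x.shape[0], -1, -1)   # one [CLS] token per image
        x = torch.cat([cls, x], dim=1)                    # (B, 197, 768)
        return x + self.pos_embed


tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))    # -> torch.Size([1, 197, 768])
```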
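
For item 5, running the segmentation variant of YOLOv8 on a single image reduces to a few lines; the yolov8n-seg.pt weights and the image filename below are placeholders, and the repository's actual web interface is not reproduced here:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")            # segmentation weights (placeholder choice)
results = model("microbial_sample.png")   # placeholder image path

masks = results[0].masks                  # per-instance masks, or None if nothing found
boxes = results[0].boxes                  # matching bounding boxes and class ids
count = 0 if masks is None else masks.data.shape[0]
print(f"segmented {count} cells")
```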
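
For item 6, a rough sketch of a ROS2 Python launch file that starts the RPLidar driver alongside rf2o_laser_odometry; the executable names, topics, and parameters here are recalled from those packages and vary between releases, so treat every value as an assumption and defer to the repository's guide:

```python
# slam_rplidar.launch.py -- hypothetical launch file; names and parameters are assumptions.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # RPLidar A2/A3 driver; the executable name differs across rplidar_ros releases.
        Node(
            package="rplidar_ros",
            executable="rplidar_composition",
            parameters=[{"serial_port": "/dev/ttyUSB0", "frame_id": "laser"}],
        ),
        # Odometry estimated from laser scans.
        Node(
            package="rf2o_laser_odometry",
            executable="rf2o_laser_odometry_node",
            parameters=[{"laser_scan_topic": "/scan",
                         "odom_topic": "/odom_rf2o",
                         "base_frame_id": "base_link",
                         "odom_frame_id": "odom"}],
        ),
    ])
```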