
Md Shahriar Forhad

About

PhD researcher focused on robotics, laser-based manufacturing, and applied machine learning. Experienced with PLC/industrial comms (Siemens & Profinet), multi-kW laser systems, robotic calibration (ABB/KUKA), and end-to-end Python control of motion stages, galvo scanners, and laser processes.

Interests: Machine Learning, Robotics, PLC, Cyber-physical Systems, Power Systems, Automation

Certifications

  • Engineer-In-Training (EIT), Texas · Issued 2025 · Expires 2033 · Certificate #84003
  • Google Data Analytics Certificate · Issued 2023 · Verify

Education

Research Experience

Skills

  • Machine Learning: CNN/LSTM/Transformers, KNN
  • Robotics: ABB, KUKA · PLC: Siemens 1515SP, Profinet
  • Laser Systems: 1 kW/8 kW LaserLine, femtosecond laser coding
  • Programming: Python (4 yr), R (2 yr), SAS; web scraping; MySQL
  • CAD/Simulation: SolidWorks, Fusion 360, SIMIO, MATLAB
  • Embedded: Raspberry Pi, Arduino; URDF; G-code; TCP/IP/Serial

Publications & Preprints

Experience

Python Packages (PyPI)

pixhawkcontroller · v0.0.2 · MIT

Utilities for connecting to Pixhawk/ArduPilot via pymavlink: serial/UDP/TCP, quick RC/servo control, telemetry snapshot, mode switching, buzzer tunes.

Install

    pip install pixhawkcontroller

Quick start

    from pixhawkcontroller import FlightControllerInterface, TonesQb

    fc = FlightControllerInterface()
    fc.connect()
    fc.print_info()
    fc.set_servo(9, 1500)
    fc.play_tune(TonesQb.twinkle_little_star)
    fc.close()

Mask R-CNN — Fruits

Instance segmentation on the Kaggle Fruits dataset using Mask R-CNN; I coded the backbone from scratch.
Backbone: ResNet-13 · Dataset: Kaggle Fruits · Task: apple/banana/orange instance masks

Full-size Mask R-CNN fruits results
Notes on the model
  • Built step by step: Mask R-CNN trained on the Kaggle Fruits set (apple, banana, orange). I resize everything to 200×200 for speed.
  • Backbones I tried: a small CNN, a ResNet-style net, ResNet-13, and a ResNet-13 with Hadamard residuals. You can switch with CustomBackbone_list_n; backbone.out_channels is set accordingly (e.g., 2048).
  • Proposals & pooling: custom AnchorGenerator and MultiScaleRoIAlign, with tweakable IoU/NMS thresholds for both the RPN and ROI heads.
  • Data pipeline: loads preprocessed .pth files (images, bbox, labels, optional masks). Works with rectangle or polygon masks and can add Gaussian noise during training.
  • Training setup: Adam with L2 weight decay, TensorBoard charts (train/val loss, IoU, label accuracy), optional early-stop on val accuracy, rolling epoch checkpoints, plus a separate “best model” save.
  • Resume & evaluate: can reload from a checkpoint or full model. The eval script runs NMS and shows ground truth vs. predictions side-by-side with mask overlays and confidence filtering.
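The NMS step the eval script runs can be sketched in plain NumPy. This is a minimal single-class greedy version for illustration (the actual project uses the framework's built-in NMS; function names here are my own):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # drop every remaining box that overlaps the kept one too much
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return keep
```

Raising `iou_thresh` keeps more overlapping detections; lowering it suppresses more aggressively, which is exactly the knob the RPN and ROI heads expose.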

Demo Video – Reinforcement Learning (PPO) for Bipedal Walker

Demonstration of training and evaluating an AI agent to learn walking behavior using Proximal Policy Optimization (PPO) in a physics-based simulation.

Notes on what is demonstrated
  • Policy Learning: The agent learns to walk through trial-and-error interactions with the environment.
  • Continuous Control: Actions are smooth and continuous, suitable for realistic robotic motion.
  • Reward-Based Training: The agent is guided by rewards that encourage stable and forward movement.
  • Deterministic Evaluation: Final performance is evaluated using mean actions for consistent results.
  • Checkpointing: The best-performing models are automatically saved during training.
  • Reproducibility: Training and evaluation are designed to be repeatable and verifiable.
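The reward-based training above hinges on advantage estimates; PPO implementations typically use generalized advantage estimation (GAE). A minimal NumPy sketch of that computation (illustrative; not the exact code behind the demo):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over one episode.
    rewards: r_0..r_{T-1}; values: V(s_0)..V(s_T) (bootstrap value last)."""
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        # one-step TD error at time t
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        # exponentially weighted sum of future TD errors
        running = delta + gamma * lam * running
        adv[t] = running
    return adv
```

The `lam` knob trades bias against variance: `lam=0` reduces to the one-step TD error, `lam=1` to the full discounted return minus the baseline.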

Demo Video – Reinforcement Learning (PPO) for Car Racing

Demonstration of training and evaluating an autonomous driving agent using Proximal Policy Optimization (PPO) in the CarRacing-v3 simulation environment. The agent learns directly from raw visual input to control steering, throttle, and braking.

Notes on what is demonstrated
  • Visual Policy Learning: The agent learns driving behavior from RGB images without access to privileged state variables.
  • Continuous Control: Steering, throttle, and braking actions are continuous and smoothly applied.
  • Reward-Guided Driving: Training rewards encourage progress along the track, stability, and lap completion.
  • Deterministic Evaluation: Final demonstrations use deterministic policy execution for consistent behavior.
  • Automatic Checkpointing: The best-performing models are saved during training based on evaluation performance.
  • Reproducibility: Training and evaluation are structured to produce repeatable and verifiable results.
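Learning from raw RGB frames usually starts with a small preprocessing step. A common choice, shown here as an illustrative sketch (the demo's actual pipeline may differ), is converting each observation to a normalized grayscale array:

```python
import numpy as np

def preprocess(frame):
    """Convert an HxWx3 uint8 RGB frame to a float32 grayscale array in [0, 1]."""
    gray = frame @ np.array([0.299, 0.587, 0.114])  # standard luminance weights
    return (gray / 255.0).astype(np.float32)
```

Collapsing three channels to one and rescaling to [0, 1] shrinks the input and stabilizes training without discarding the track geometry the policy needs.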

Demo Video – Laser Powder Bed Fusion (LPBF)

Short demo of my Laser Powder Bed Fusion (LPBF) setup using a custom Python/pygame GUI to coordinate XYZ linear stages and a galvo scanner in real time.
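The core coordination in LPBF is a simple layer loop: lower the build plate, recoat, scan the layer. A stub-level Python sketch of that loop (class and method names are stand-ins; real drivers speak vendor serial/TCP protocols, and the GUI layer is omitted):

```python
class Stage:
    """Stand-in for an XYZ linear stage; a real driver would issue motion commands."""
    def __init__(self):
        self.z = 0.0
    def lower_build_plate(self, dz):
        self.z -= dz

class Galvo:
    """Stand-in for a galvo scanner; a real driver streams XY scan commands."""
    def __init__(self):
        self.paths_scanned = 0
    def scan(self, path):
        self.paths_scanned += 1

def build(layers, layer_height, stage, galvo):
    """Core LPBF loop: drop the plate one layer height, then scan the toolpath."""
    for path in layers:
        stage.lower_build_plate(layer_height)
        galvo.scan(path)
```

Keeping stage and scanner behind small classes like this is what lets one event loop (pygame in the demo) drive both devices in real time.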

Demo Video – ABB Robot Absolute & Relative Movements

Demonstration of ABB robot executing various motion modes using RAPID instructions.

Notes on performed movements
  • Absolute Axis Movement: Direct movement of individual robot axes to specified absolute positions.
  • Absolute Linear Movement: Straight-line motion between points in Cartesian space.
  • Absolute Circular Movement: Circular interpolation movement between points along a defined arc.
  • Relative Axis Movement: Movement relative to the robot’s current joint angles.
  • Relative Linear Movement: Translation by a specified offset from the current Cartesian position.
  • Absolute Linear Movement synchronized with positioner 2-axis movement: Coordinated robot and positioner motion for complex toolpath execution.
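The relative moves above all reduce to offsetting the current target; RAPID does this with `Offs` (Cartesian) and per-joint deltas. A small Python sketch of the same idea (illustrative helpers, not the robot-side code):

```python
def offs(pos, dx, dy, dz):
    """Mimic RAPID's Offs(): translate a Cartesian position by an offset in mm."""
    x, y, z = pos
    return (x + dx, y + dy, z + dz)

def rel_joints(joints, deltas):
    """Relative axis move: add per-joint deltas (deg) to current joint angles."""
    return [j + d for j, d in zip(joints, deltas)]
```

An absolute move targets a fixed pose; a relative one targets `offs(current_pose, ...)`, which is why the two look identical to the motion planner once the target is computed.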

Defense Manufacturing Expo — UTRGV (2022)

Lead-through teaching demo: I moved the robot by hand while the system recorded waypoints to a file, then replayed them automatically (including pick-and-place). Entire workflow in Python.
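The record-then-replay workflow can be sketched as a pair of small functions (the pose source and robot interface are stand-ins here; the real demo read poses from the ABB controller):

```python
import json

def record(poses, path):
    """Save a list of taught waypoints ((x, y, z) tuples) to a JSON file."""
    with open(path, "w") as f:
        json.dump([list(p) for p in poses], f)

def replay(path, move_fn):
    """Reload waypoints and send each one to the robot via move_fn."""
    with open(path) as f:
        for p in json.load(f):
            move_fn(tuple(p))
```

Decoupling teaching from playback through a file is what makes the same waypoint set reusable for repeated pick-and-place runs.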

URDF Robot Demo

A live 3D viewer of one of my robot models (URDF). You can rotate, zoom, and move the joints.
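Under the hood, a URDF viewer starts by parsing links and joints from the XML. A stdlib-only sketch of extracting the movable joints (illustrative; a full viewer also reads origins, axes, and limits):

```python
import xml.etree.ElementTree as ET

def movable_joints(urdf_text):
    """Return (name, type) for every non-fixed joint in a URDF string."""
    root = ET.fromstring(urdf_text)
    return [(j.get("name"), j.get("type"))
            for j in root.iter("joint")
            if j.get("type") != "fixed"]
```

These are exactly the joints a viewer exposes as interactive sliders; fixed joints only affect the static link transforms.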