
KAIT 2026

  • Status: Active
  • Conference: PENS-KAIT 2026 (in collaboration with Kanagawa Institute of Technology)
  • Paper: "Adaptive Hierarchical Detection: A Two-Phase Framework for Context-Aware in Edge Device"
  • Repo: /ductor/workspace/research/raspibotv2-kait2026


System

Hardware: Yahboom Raspbot on Raspberry Pi 5

Two-phase pipeline:

Phase 1: Scene Recognition
  └── Places365 GoogLeNet
        └── Classifies scene context

Phase 2: Context-Aware Object Detection
  └── YOLO switching based on Phase 1 output
        ├── YOLO26n (lightweight, speed-priority)
        └── YOLOWorld (open-vocab, context-rich scenes)

Runtime: NCNN (optimized for RPi edge inference)
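
The two-phase flow above can be sketched as a simple dispatch from the Phase 1 scene label to the Phase 2 detector. This is a minimal illustration only: the scene groupings, label names, and detector identifiers below are assumptions for the sketch, not the project's actual configuration.

```python
# Illustrative Phase 1 -> Phase 2 switching logic.
# Scene groupings here are hypothetical examples, not the real mapping.

# Scenes assumed simple enough for the lightweight detector.
SIMPLE_SCENES = {"corridor", "highway", "parking_lot"}
# Scenes assumed cluttered enough to justify the open-vocabulary detector.
CONTEXT_RICH_SCENES = {"kitchen", "office", "living_room"}

def select_detector(scene_label: str) -> str:
    """Map a Places365 scene label to the detector to run in Phase 2."""
    if scene_label in CONTEXT_RICH_SCENES:
        return "yolo_world"   # open-vocab, for context-rich scenes
    return "yolo26n"          # lightweight default, speed-priority
```

Defaulting to the lightweight detector for unknown scenes keeps latency predictable on the Pi; only explicitly listed context-rich scenes pay the cost of the open-vocabulary model.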

Training

  • Notebook: yolov8s_world_objects365_finetune.ipynb
  • Run path on RPi: /home/takanolab/yahboom_control/PENS-KAIT 2026/notebooks/
  • RPi at takano-lab: 100.88.131.75 (Tailscale), user: takanolab/takanolab
  • 1 epoch tested: 6157s/epoch, mAP50=10.9% (CPU-only test run)
  • Full training: run manually on Colab (no public API for remote execution)
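
The measured 6157 s/epoch on the Pi's CPU makes the case for moving full training to Colab. A quick extrapolation (the 100-epoch count is illustrative, not a decided training schedule):

```python
# Back-of-envelope extrapolation of on-device training time.
SECONDS_PER_EPOCH = 6157   # measured, CPU-only run on the RPi 5
epochs = 100               # illustrative full-training epoch count

total_hours = SECONDS_PER_EPOCH * epochs / 3600
print(f"~{total_hours:.0f} h on-device")   # ≈ 171 h, roughly a week
```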

Reference Papers

10 PDFs uploaded to S3: mukhayyar-cloud/papers/


  • Edge AI Research
  • MukhayyarResearchBot handles literature search for this project