• Dominique posted an update 5 years, 1 month ago

    This is face tracking and the yolo process running at the same time. Two cameras: one for tracking, one for yolo.

    • Very cool! Can you share how that is done in MRL? Would appreciate very much

      • You need MRL with the Yolo service. The MRL team is working on yolo integration in MRL.

        Here is a usage example:

        from java.lang import String
        from org.myrobotlab.service import Runtime
        from org.myrobotlab.service import OpenCV
        from time import sleep
        from shutil import copyfile
        import os, sys

        list = []
        cpt = 0

        opencv = Runtime.createAndStart("opencv", "OpenCV")
        yolo = Runtime.createAndStart("yolo", "Yolo")
        ImageYolo = Runtime.createAndStart("ImageYolo", "ImageDisplay")

        opencv.capture()
        sleep(5)

        # ##############################################################################
        def displayPic(pic):
            r = ImageYolo.displayFullScreen(pic)

        # ##############################################################################
        def takeFotoForYolo():
            print "Photo..."
            os.chdir("d:\\myrobotlab")

            photoFileName = opencv.recordSingleFrame()
            print photoFileName

            # remove the previous frame first, otherwise the rename fails on Windows
            if os.path.exists("image.jpg"):
                os.remove("image.jpg")
            os.rename(photoFileName, "image.jpg")
            copyfile("image.jpg", "d:\\myrobotlab\\yolo\\image.jpg")
            sleep(0.1)

            yolo.execYolo()

        # ##############################################################################
        def statisticResult():
            global list
            NbElement = 0

            print "statistics..."
            res = yolo.StatisticResult()

            if res == True:
                # At this point there are necessarily objects
                list = []
                file = open("statistics.txt", "r")

                for ligne in file:
                    list.append(ligne)
                file.close()

                NbElement = len(list)
                print "Number of recognized elements:", NbElement

                for val in list:
                    print val
            else:
                print "Nothing found!"

        # ##############################################################################
        def analyseResult():
            global list
            NbElement = 0

            print "analysis..."
            res = yolo.AnalyseResult()

            if res == True:
                # At this point there are necessarily objects
                list = []
                file = open("finalresult.txt", "r")

                for ligne in file:
                    list.append(ligne)
                file.close()

                NbElement = len(list)
                print "Number of recognized elements:", NbElement

                for val in list:
                    print val
            else:
                print "Nothing found!"

        ##################################################################################
        # Timer ...
        ##################################################################################
        def refresh(timedata):
            global cpt

            cpt += 1
            if cpt == 1:
                takeFotoForYolo()
            else:
                ImageYolo.closeAll()
                displayPic("d:\\myrobotlab\\yolo\\predictions.jpg")
                statisticResult()
                sleep(0.1)
                analyseResult()
                cpt = 0

        timer = Runtime.start("Timer", "Clock")
        timer.setInterval(20000)
        timer.addListener("pulse", python.name, "refresh")
        timer.startClock()

    • What is YOLO and what does it do? Your face tracking is very good.

      • With yolo, you can do this: http://myrobotlab.org/content/yolo-dnn-support-now-opencv
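        In short, yolo ("You Only Look Once") is a deep-learning object detector: you give it one image and it returns a list of labeled bounding boxes, each with a confidence score. As a rough standalone illustration (plain Python with made-up data, not the MRL Yolo API), the raw detections are typically filtered by a confidence threshold before you use them:

```python
# Hypothetical yolo-style detections: (label, confidence, (x, y, w, h)).
# Real values come from the network; these are invented for illustration.
detections = [
    ("person", 0.92, (10, 20, 80, 200)),
    ("dog",    0.35, (120, 60, 60, 40)),
    ("chair",  0.81, (200, 90, 50, 70)),
]

def filter_detections(dets, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in dets if d[1] >= threshold]

for label, conf, box in filter_detections(detections):
    print("%s %.2f %s" % (label, conf, box))
```

        The `predictions.jpg` written by the script above is this same result drawn back onto the image.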

        For face tracking, I use the Tracking service from MRL:

        oeilG = Runtime.create("oeilG", "OpenCV")
        oeilG.setFrameGrabberType("org.myrobotlab.opencv.SarxosFrameGrabber")
        oeilG = Runtime.start("oeilG", "OpenCV")
        oeilG.setCameraIndex(0)
        tracking = Runtime.createAndStart("tracking", "Tracking")

        pid = tracking.getPID()
        pid.setPID("x", 5.0, 5.0, 0.1)
        pid.setPID("y", 5.0, 5.0, 0.1)

        # optional filter settings
        opencv = tracking.getOpenCV()

        # attach the camera and the pan/tilt servos of the head
        tracking.connect(oeilG, head.rotHead, head.neck)
        opencv.broadcastState()
        sleep(1)

        tracking.faceDetect()

    • That's cool, definitely need to check out the code for this. Thanks for sharing!