• juerg posted an update 4 months ago

    My Marvin got a new sensor, an Intel RealSense D415 for taking depth images. I mounted it on the head at an angle that lets it see the floor in front of the robot. Mounting it on the head also gives a good observation point, and the head has a wide movement range. Adding an IMU to the head is probably the next step, to get more precise yaw and pitch values than I can derive from the servo settings.
    Thought we had an “add photo” button on this control but can’t see one at the moment?

    • Hello Juerg,
      The plan is to add the Intel RealSense to MyRobotLab, but I don’t know when it will happen.
      Did you mount it above the head, or in place of the eyes?
      You should be able to post pictures again, there was an issue with WordPress.

      • Hi Gael
        I have to admit that I no longer run my robot with MRL. It gave me too many problems with the Arduino connections, it takes ages to start up, and it is overloaded with stuff I do not need. So I wrote my own servo task in Python to control Marvin’s joints, and I have other Python tasks to control the cart, read Kinect depth and cam images, and add my “autonomy” moves.
        I have mounted the cam above the eyes, under the cap. As of now, however, I do not see an option to publish a picture here.
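        To illustrate the kind of standalone servo task I mean, here is a minimal Python sketch (names like ServoTask and the joint labels are my own placeholders, not the actual code): each joint command goes onto a queue, and a worker thread plays it out, standing in for whatever serial/servo-controller backend is really used.

```python
import queue
import threading

# Minimal sketch of a standalone servo task (hypothetical names; the real
# implementation is not shown in this thread). Joint commands are pushed
# onto a queue as (joint, target_degrees) tuples; a worker thread consumes
# them, standing in for writes to the actual servo controller.
class ServoTask:
    def __init__(self):
        self.commands = queue.Queue()
        self.positions = {}          # last commanded position per joint
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def move(self, joint, degrees):
        self.commands.put((joint, degrees))

    def _run(self):
        while True:
            joint, degrees = self.commands.get()
            # A real task would write to the servo hardware here.
            self.positions[joint] = degrees
            self.commands.task_done()

servos = ServoTask()
servos.move("head_yaw", 30)
servos.move("head_pitch", -10)
servos.commands.join()               # wait until both commands are handled
print(servos.positions)
```

        The point of the queue is that other tasks (cart control, depth reading, autonomy) can post joint commands without blocking on the servo hardware.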

        • Thanks for the reply,
          However, could you run a test for us with the latest version of MRL (work in progress), just to check whether the depth image works out of the box with the RealSense?
          http://build.myrobotlab.org:8080/job/myrobotlab-maven/

          • Tried OpenNI, but it does not show anything (and it also does not provide controls to select the streaming device?). Do I need to use another service?

            • No, it should work with the OpenNI service.
              Have you tried the “CAPTURE” button?
              Do you get this red error line at the bottom of the Swing GUI?
              “found 0 devices – Jenga software not initialized :P”

    • When starting OpenNI I get “initContext found 1 devices” in the bottom line.
      Clicking “capture” on the service page, I only get “starting user worker” but no image.
      I tried all the buttons; no image is shown. It would be nice to see which device is connected.

      • Do you have other imaging devices connected, or only the Intel RealSense?
        Maybe OpenNI detects another imaging source.
        We could try to manually add the drivers in C:\myrobotlab\myrobotlab.1.1.173\libraries\native\OpenNI2\Drivers

        I did that for my Orbbec Astra, and it worked partially. The depth image is working, but the skeleton wouldn’t get activated when requested.
        Still, this gets us further than the simple plug and play we have with the Kinect 360.
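        For what it’s worth, the manual install is just copying the driver files into that folder. A small Python sketch of the idea (the driver file name below is a placeholder; the real RealSense/Astra DLL names come from the vendor packages), demonstrated with temporary folders so it runs anywhere:

```python
import pathlib
import shutil
import tempfile

# Sketch of manually dropping extra camera drivers into MRL's OpenNI2
# driver folder. "ExampleDriver.dll" is a placeholder name; the actual
# driver DLLs depend on the vendor package you downloaded.
def install_drivers(src_dir, drivers_dir):
    drivers_dir = pathlib.Path(drivers_dir)
    drivers_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for dll in pathlib.Path(src_dir).glob("*.dll"):
        shutil.copy2(dll, drivers_dir / dll.name)
        copied.append(dll.name)
    return sorted(copied)

# Demo with temporary folders; on a real install drivers_dir would be e.g.
# C:\myrobotlab\myrobotlab.1.1.173\libraries\native\OpenNI2\Drivers
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
pathlib.Path(src, "ExampleDriver.dll").write_bytes(b"stub")
print(install_drivers(src, dst))     # ['ExampleDriver.dll']
```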

        • Hahh, I still had my Kinect plugged in. Without the Kinect, I now get this “Jenga software not initialized” message when starting the OpenNI service. I sent the pic of the mounted cam to your mail account.

          • IntelRealSense on InMoov

            Nice idea to keep the InMoov cap above the sensor.
            Okay, so MRL.173 is missing the drivers needed to get the RealSense working.
            I think these two files would need to be added in C:\myrobotlab\myrobotlab.1.1.173\libraries\native\OpenNI2\Drivers
            IntelRealSense driver

             

    • not a problem, I can rerun a test with an updated mrl version whenever it’s ready. Will partially be on vacation over the next weeks, so maybe no immediate response. Have a good time.

    • Hi all, how are you getting on, Juerg? I was just looking at the Intel offer: “RealSense™ Depth Camera D435 and Tracking Camera T265 bundled together for one great price, and get started with your next project today”
      And I am feeling I am going to go for it. But I am really thinking about how I can integrate the D435 without ruining the look. I was thinking high in the chest, but on the other hand I would not be able to use the head movement for faster environment/room scanning and would need the whole body to move more. But I think it would be a good plan A, and I would be able to move it later if need be.

      • I thought about the location too. I need to see the path my cart is moving on, and first thought of mounting the cam on the cart. It would look nicer than the head location, but the flexibility of the head makes it much more usable. From the picture you can see that the device is partially covered by the hat, which can be removed if needed. I might also decide to redesign the skull cap itself and have the cam mounted in the head: maybe not completely, as it might conflict with the eye movement mechanism, but at least partially, and not sticking out as much as in the picture.
        I am now busy implementing the software (Python) to identify obstacles/abysses in the path.
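        As a rough illustration of that obstacle/abyss check (my own sketch, not the actual code): take one row of depth readings (in mm) along the line where the floor should be, then flag cells much nearer than the expected floor distance as obstacles, and cells much farther away, or with no reading at all, as drop-offs.

```python
# Hedged sketch of an obstacle/abyss classifier over one row of depth
# readings (mm). Thresholds and names are illustrative assumptions.
def classify_floor(depths_mm, expected_mm, tol_mm=100):
    labels = []
    for d in depths_mm:
        if d == 0 or d > expected_mm + tol_mm:
            labels.append("abyss")       # no return, or floor too far away
        elif d < expected_mm - tol_mm:
            labels.append("obstacle")    # something above the floor plane
        else:
            labels.append("floor")
    return labels

scan = [1200, 1190, 900, 1210, 0, 1400]   # one row of depth values, mm
print(classify_floor(scan, expected_mm=1200))
```

        In practice the expected floor distance varies per image row with the head pitch, which is one reason more precise pitch values (IMU rather than servo settings) matter for this check.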

          • As a second thought: before you purchase the T265, make sure you understand how you can implement your odometry for the device. I read through the article and at the end had no idea how I would have to accomplish this. My cart uses an encoder attached to one of its wheels and produces around 3 ticks per mm of robot movement.
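          For reference, at roughly 3 ticks per mm the tick-to-distance conversion is trivial:

```python
# Quick arithmetic for the wheel-encoder odometry mentioned above:
# at roughly 3 encoder ticks per mm of travel, distance is ticks / 3.
TICKS_PER_MM = 3

def ticks_to_mm(ticks):
    return ticks / TICKS_PER_MM

print(ticks_to_mm(4500))   # 1500.0 mm, i.e. 1.5 m of cart movement
```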

            • Hi Juerg, many thanks, really appreciated. I will go through more of a thought process. Well done, and many thanks for pathfinding with the RealSense. I believe that in time this will become the optimal solution over the Orbbec or the Kinect.