Alan Timm posted an update 4 years, 8 months ago
Quick and simple demo of full body animation using a wireless x-y pendant. Lots of changes and stuff to share. I’ll re-film it later this week when I have the next set of changes complete (with less noise and a clean desk).
rviz is running in the background displaying the robot state. The basic animation is there; I still have a fair bit of tuning to get it right.
This is the basic demo that I’m planning to run for SoCal Makercon this year.
Gael’s latest version of the eyes and eye mechanism look fantastic, so I’m printing out those parts this week to upgrade the eyes, and I’ll be also looking at Matt’s eyelid mod.
Looks great. This is good inspiration.
Fantastic, Alan… the most realistic movements so far, hands down!… I am working on a new InMoov for which I am also planning some sophisticated movement management…
Did you create your own custom ROS package for this? I still use ez robot but would love to move up to ROS as it looks like much more can be done with ROS than either ez robot or MyRobotLab… I dabbled with ROS a while back so maybe it’s about time I have another look…
Hey Richard, thanks!
Each toolkit has its own benefits. There’s a nice community built up around InMoov+MRL, so it’s a great way to get started without writing everything from scratch. Another R.S.S.C. club member is building an InMoov using EzRobot, and I think there are some others on the forum as well.
I went with ROS because I wanted to leverage its tools and software ecosystem (rosserial_arduino, rviz, Gazebo, MoveIt, etc.), but I also had to write all the robot interfaces from scratch. I’ll continue to post my ROS software stack here:
The repo is a little stale, but it’s enough to get started. I’ll push another update once I complete all the changes for the look demo.
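Writing the robot interfaces from scratch mostly comes down to glue code that maps URDF joint angles (radians) onto servo pulse widths. A minimal sketch of that kind of mapping (the joint limits and pulse range here are illustrative assumptions, not values from my stack):

```python
def angle_to_pulse_us(angle_rad, min_rad=-1.57, max_rad=1.57,
                      min_us=1000, max_us=2000):
    """Linearly map a joint angle in radians to a hobby-servo pulse
    width in microseconds, clamping to the joint limits first."""
    angle_rad = max(min_rad, min(max_rad, angle_rad))
    span = (angle_rad - min_rad) / (max_rad - min_rad)
    return int(round(min_us + span * (max_us - min_us)))

print(angle_to_pulse_us(0.0))  # midpoint of travel -> 1500
```

Each servo on the real robot needs its own calibrated limits, so in practice this table ends up being per-joint configuration rather than hard-coded defaults.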
That’s awesome Alan… Thanks
Great work, superb work, full of fluidity. The gestures flow like water in a river.
A very late comment on your great work using ROS in conjunction with InMoov.
I had shared your great video on my Google+ page right after you posted it.
Thanks for your video, I really enjoyed looking at InMoov moving so gracefully.
I’ve just made my first steps with ROS and found the GitHub links, so I now have inmoov_model-master.zip and inmoov_ros-master.zip.
Can you point me to a tutorial on where to place the files and, ideally, how to make it run?
Hey Juerg, Firstly, follow all the tutorials to install and set up ROS. Then, you should be able to follow the README.md from here:
After you get everything set up, you’ll most likely want to run the urdf model in rviz, using this command:
roslaunch inmoov_description display.launch #launches rviz
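For orientation, a display launch file for a URDF typically just wires the model into rviz. A rough sketch of what such a file contains (the file paths and default here are assumptions, not the actual contents of inmoov_description):

```xml
<launch>
  <!-- path to the urdf; a default lets the launch run without arguments -->
  <arg name="model" default="$(find inmoov_description)/urdf/inmoov.urdf"/>
  <param name="robot_description" textfile="$(arg model)"/>
  <!-- sliders publish joint states, robot_state_publisher turns them into TF -->
  <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher">
    <param name="use_gui" value="true"/>
  </node>
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"/>
  <node name="rviz" pkg="rviz" type="rviz"/>
</launch>
```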
I’ll be posting another update within the next couple of weeks to keep github in sync with my latest.
I’ve started the beginners’ tutorial and had a look at the tutorial you linked.
As I only recently installed ROS, I got the Kinetic version, so it looks like I have to wait for a matching MoveIt release. But since the package looks rather big and complex, I will need to work my way through all the tutorials first anyway.
Yep, I have a new SSD with Ubuntu 16.04 LTS and ROS Kinetic waiting for MoveIt to be released. They’re very close to their first release. Until then, you will have some time to familiarize yourself with the core ROS concepts, go through the tutorials, and control the urdf model in rviz.
If you haven’t found it already, Jason has published his Gentle Introduction to ROS online as pdf for free. https://cse.sc.edu/~jokane/agitr/
I spent some time trying to make InMoov run with Kinetic. I found a few issues with the scripts (a renamed .py file, a missing vector include, some Boost shortcomings), but was finally able to run setup_parameters.py.
As I run Ubuntu on my PC in VirtualBox, I assume it is expected that servobus.launch complains about missing ports.
Trying to run rviz tells me that I need to pass in a ‘model’ argument. I figured out this should be a urdf reference, but I am currently stuck on that.
I haven’t moved everything to ROS Kinetic yet, but I’ll keep an eye out for the issues you’ve shared.
Unfortunately, the glue code in the ROS stack that I’ve published will only work with my robot, but it is a great starting point to integrate ROS into your bot.
You should be able to run the urdf model in rviz, then write python programs to manipulate the model until you have integrated the stack with your own bot.
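As a starting point for those python programs, the positions you publish are just a sampled trajectory per joint. A pure-Python sketch of the kind of smooth sweep you would send out as sensor_msgs/JointState messages (the amplitude, period, and rate here are arbitrary examples; the rospy publishing loop is left out so the math runs anywhere):

```python
import math

def sweep_positions(amplitude_rad, period_s, rate_hz, cycles=1):
    """Generate a sinusoidal position trajectory for one joint, sampled
    at rate_hz; each value would become the position field of a
    JointState message published at that rate."""
    samples_per_cycle = period_s * rate_hz
    steps = int(samples_per_cycle * cycles)
    return [amplitude_rad * math.sin(2 * math.pi * i / samples_per_cycle)
            for i in range(steps)]

# e.g. a 0.5 rad sweep over a 2 s period, sampled at 10 Hz
traj = sweep_positions(0.5, 2.0, 10)
print(len(traj))  # 20 samples
```

Replacing the sine with an eased or minimum-jerk profile is where most of the “fluidity” tuning happens.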
You are correct, you need to pass the model parameter in to display.launch. I’ll push an update today that has a default so it can be run without arguments.
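Until that update lands, the argument can be supplied on the command line with roslaunch’s `name:=value` syntax (the urdf path below is only an illustration, not the real location in the repo):

```shell
# tell display.launch which urdf to load
roslaunch inmoov_description display.launch model:='$(find inmoov_description)/urdf/inmoov.urdf'
```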
You may find it easier to get hold of me via my Facebook page.
Nice work Alan,
Any idea when you will be releasing the demo? I’d love to try it out