This post details some experiments from a day spent playing with the VINS-Mono library. To give some more detail, VINS-Mono is a computer-vision-based inertial state estimator: it fuses camera frames with IMU data to estimate pose.

The original motivation for testing out this package was that I was looking into methods of underwater localization for the AUV I’m building on UBC-Subbots. This is of interest because the usual RF and light-based approaches, like lidar, do not perform well underwater: the ion-rich water we move through strongly absorbs electromagnetic radiation. The set of sensors we can use on our AUV is therefore very limited. While planning which sensors I could feed into our system’s state (pose) estimator, I wondered whether it would be possible to compute the transformation between video frames to estimate the inertial change of our system.

This is when I came across the work done by the Hong Kong University of Science and Technology. I decided to do a “quick” test of the package with the following hardware:

  1. Phidgets Spatial 3/3/3 IMU
  2. ELP 1080p Fisheye Camera

All running with ROS.


Sensors

  1. The first step was setting up the camera feed. To do so, I used the usb_cam package from ROS. The only configurations I had to set were video_device, image_width, and image_height. Since the default namespace for usb_cam is usb_cam, this was all easily done by running rosparam set [parameter] [value].

e.g. the input device was set with: rosparam set /usb_cam/video_device /dev/video1

Though the camera is a 1080p (1920x1080) camera, I chose a 720p (1280x720) image size so that the data processing down the line is a little lighter.

To run the node: rosrun usb_cam usb_cam_node. If you look at rostopic list at this point, you will see /usb_cam/image_raw being broadcast amongst the other topics being published. The full camera setup is sketched below.
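Putting the camera steps together, the sequence looks roughly like this (assuming the camera enumerates as /dev/video1 and that roscore is already running):

    # Set the usb_cam parameters (read by the node on startup)
    rosparam set /usb_cam/video_device /dev/video1
    rosparam set /usb_cam/image_width 1280
    rosparam set /usb_cam/image_height 720

    # Start the camera driver
    rosrun usb_cam usb_cam_node

    # In another terminal: confirm the stream is up and check its rate
    rostopic list | grep usb_cam
    rostopic hz /usb_cam/image_raw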

  2. Though the video was now successfully streaming, the distortion from the fisheye lens needed to be corrected. To do that I used the image_proc package. Running it is straightforward; the only thing was that I needed to specify usb_cam’s namespace so that the node could subscribe to the camera topics.

To run image_proc: ROS_NAMESPACE=usb_cam rosrun image_proc image_proc. The corrected image can then be found under the topic /usb_cam/image_rect_color.
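As a quick sanity check, you can eyeball the rectified stream with the standard image_view viewer (any ROS image viewer works here):

    # Run image_proc inside the camera's namespace so it finds
    # the image_raw and camera_info topics it rectifies from
    ROS_NAMESPACE=usb_cam rosrun image_proc image_proc

    # View the rectified output
    rosrun image_view image_view image:=/usb_cam/image_rect_color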

  3. Next we want to import the IMU data. Fortunately for us, Phidgets has a ROS package for this as well, called phidgets_imu. This package is straightforward: if installed correctly with the udev rules set, rosrun phidgets_imu phidgets_imu_node will broadcast the IMU data under /imu/data_raw.
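To confirm the IMU is actually publishing, the usual rostopic tools work here too:

    # Start the Phidgets IMU driver (requires the udev rules so the
    # device is accessible without root)
    rosrun phidgets_imu phidgets_imu_node

    # In another terminal: check the publish rate and peek at a message
    rostopic hz /imu/data_raw
    rostopic echo -n 1 /imu/data_raw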

Running the estimator

  1. After installing the repo according to its instructions, you will need to modify the launch/configuration files for things to run correctly.

  2. Afterwards you will need to modify some paths and topics under the config files, found in VINS-Mono/config/euroc/. There are 2 YAML files where you will find the parameters imu_topic, image_topic, output_path, image_width and image_height. If you followed all the steps above, these will be /imu/data_raw for imu_topic, /usb_cam/image_rect_color for image_topic, 1280 for image_width and 720 for image_height. For output_path, pick whatever you please; just remember to also change line 70. A sketch of the edits follows below.
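To locate the lines to edit without hunting by eye, something like this works (the exact YAML filenames under config/euroc/ vary between VINS-Mono versions, so the glob is an assumption):

    # Find the relevant parameters in the euroc config files
    grep -nE "imu_topic|image_topic|output_path|image_width|image_height" \
        VINS-Mono/config/euroc/*.yaml

    # Expected values after editing, per the setup above:
    #   imu_topic:    "/imu/data_raw"
    #   image_topic:  "/usb_cam/image_rect_color"
    #   image_width:  1280
    #   image_height: 720
    #   output_path:  any writable directory of your choosing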

  3. Before you launch the estimator, in order for rviz to work correctly, you need to give rviz a world reference frame. I did this with the tf package: rosrun tf static_transform_publisher 0 0 0 0 0 0 map world 10, which rviz then picks up.
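For reference, the positional arguments of static_transform_publisher are the translation, the rotation as yaw/pitch/roll, the parent and child frames, and the publish period in milliseconds:

    # Publish an identity transform between map and world every 10 ms:
    # args are: x y z yaw pitch roll parent_frame child_frame period_ms
    rosrun tf static_transform_publisher 0 0 0 0 0 0 map world 10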

  4. Next, I would recommend also adjusting the rviz configuration file VINS-Mono/config/vins_rviz_config.rviz accordingly.

  5. Lastly, just run things as shown in the repository README:

     roslaunch vins_estimator euroc.launch 
     roslaunch vins_estimator vins_rviz.launch 
    

Notes

If your rviz is having problems loading libGL (an “unable to draw” error), try disabling hardware acceleration.
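One common way to do this is forcing software rendering via Mesa’s LIBGL_ALWAYS_SOFTWARE environment variable (a general OpenGL workaround, not specific to rviz):

    # Force software OpenGL rendering for this shell session
    export LIBGL_ALWAYS_SOFTWARE=1
    roslaunch vins_estimator vins_rviz.launch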


Preliminary Thoughts

It took me a while to get through a lot of the configuration problems, and it was hard to figure out what I was supposed to see. Unfortunately, though I could see the frames being tracked, the actual path estimates were far from accurate or precise. Now that things are working, I will be able to take a closer look into how to get the package performing correctly.