This is an implementation of visual odometry using a stereo camera within the CARLA simulator.
- Download the latest CARLA release from the releases page of their GitHub repository. I am using the "CARLA_0.9.11.zip" Windows release. If you choose the Windows release, you only need to unzip the downloaded file.
- Run "WindowsNoEditor\CarlaUE4.exe" to start the CARLA simulator server.
- Place the script "stereo_visual_odometry" in "WindowsNoEditor\PythonAPI\examples", and run it.
- Compute the disparity map of the left frame using OpenCV's `StereoSGBM` matcher.
- Calculate depth from the focal length, the baseline distance, and the computed disparity map: `depth = Z_c = f*b/d`.
- Feature extraction is done using `cv2.goodFeaturesToTrack` and an ORB descriptor.
- Matching is done using a brute-force matcher. Matches are filtered according to a distance threshold to remove ambiguous matches.
Using the calculated depth and the matches between the t-1 and t frames, the motion is estimated as follows:
- Three inputs are passed to the `cv2.solvePnPRansac()` solver:
  - objectPoints: 3D points in camera coordinates.
  - imagePoints: the corresponding 2D points' pixel values.
  - K: the camera intrinsic parameters matrix.

  The solver returns the rotation and translation vectors.
- Get the rotation matrix R from the returned rotation vector using `cv2.Rodrigues`.
- The previously calculated [R|t] matrix is used to calculate the new trajectory point as follows:

  ```python
  RT = np.dot(RT, np.linalg.inv(rt_mtx))
  new_trajectory = RT[:3, 3]
  ```
- "Visual Perception for Self-Driving Cars" course by the University of Toronto on Coursera: I couldn't have coded this without watching this course first, and I even reused some of my course homework code here.
- CARLA Simulator Documentation