- Download the packages
- Download exercise 3
- Get a map of the room
- Save the map
- Run the amcl simulation and try to move the robot around by giving it goals
- Determine 5 goal locations in the map (you can use clicked points in rviz; see the note after this list)
- Create a script that drives the robot to the 5 locations sequentially (a sketch is given after the amcl setup below)
- End when the robot has reached the final location
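One simple way to collect the 5 goal locations (a suggestion, not from the exercise handout): click them in rviz with the "Publish Point" tool and record what gets published on /clicked_point.
rostopic echo /clicked_point # each click publishes a geometry_msgs/PointStamped in the map frame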
roslaunch exercise3 rins_world.launch # Start the 3D simulation
# Build the map of the working environment of the robot.
# Uses the gmapping package, which builds the map from the laser scan (simulated from the Kinect)
roslaunch exercise3 gmapping_simulation.launch
# View the map that we are building
roslaunch turtlebot_rviz_launchers view_navigation.launch
# Control the robot with the keyboard via terminal
# Move slowly, stop frequently and rotate the robot around its own axis
roslaunch turtlebot_teleop keyboard_teleop.launch
# Once you are satisfied with the map, save it (-f sets the file name; map1 matches the amcl launch file below)
rosrun map_server map_saver -f map1
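map_saver writes two files: map1.pgm (the occupancy image) and map1.yaml (its metadata). The exact values depend on your run; a typical map1.yaml looks roughly like this:
image: map1.pgm
resolution: 0.050000
origin: [-10.000000, -10.000000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196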
Once the map is built, we are ready to let the robot drive itself.
- Close all the running programs
roslaunch exercise3 rins_world.launch
# Start the localization node amcl (Adaptive Monte Carlo Localization package)
# Edit the launch file to use the map that we have created
# <arg name="map_file" default="$(find exercise3)/maps/map1.yaml"/>
roslaunch exercise3 amcl_simulation.launch
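If you prefer not to edit the launch file, the map_file argument can usually be overridden from the command line instead (assuming the launch file exposes it as an <arg> with a default, as in the line above):
roslaunch exercise3 amcl_simulation.launch map_file:=$(rospack find exercise3)/maps/map1.yaml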
# Open an rviz visualization of the robot
roslaunch turtlebot_rviz_launchers view_navigation.launch
Write a script that sends goals to the robot: a SimpleActionClient that communicates with the SimpleActionServer provided by move_base.
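A minimal sketch of such a script in Python, using actionlib and move_base_msgs; the node name and the goal coordinates below are placeholders, not values from the exercise.

#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

# Hypothetical goal list: (x, y, orientation z, orientation w) in the map frame
GOALS = [
    (1.0, 0.5, 0.0, 1.0),
    (2.0, 1.5, 0.7, 0.7),
    (0.5, 2.0, 1.0, 0.0),
    (-1.0, 1.0, 0.0, 1.0),
    (0.0, 0.0, 0.0, 1.0),
]

def main():
    rospy.init_node("goal_sequencer")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    for x, y, qz, qw in GOALS:
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = "map"
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.z = qz
        goal.target_pose.pose.orientation.w = qw
        client.send_goal(goal)
        client.wait_for_result()  # block until this goal succeeds or aborts
        rospy.loginfo("Goal (%.2f, %.2f) finished with state %d", x, y, client.get_state())

if __name__ == "__main__":
    main()

wait_for_result() makes the goals strictly sequential, and the node simply exits after the last goal, which matches the requirement to end at the final location.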
rostopic echo /move_base_simple/goal # goals published by the "2D Nav Goal" tool in rviz (geometry_msgs/PoseStamped)
rostopic echo --noarr /map # inspect the map metadata without printing the full occupancy grid array
# For exercise 4, start the simulation, localization and visualization again
roslaunch exercise3 rins_world.launch
roslaunch exercise3 amcl_simulation.launch
roslaunch turtlebot_rviz_launchers view_navigation.launch
rosrun exercise4 breadcrumbs
rosrun exercise4 face_localizer_dnn # Face detection with a DNN-based detector
rosrun exercise4 face_localizer_dlib # Face detection with dlib
Debugging (rosconsole)
# View and filter the log messages from the running nodes, e.g. with rqt_console
rqt_console
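The messages shown there come from the rosconsole logging calls in the nodes; a minimal sketch of using them from a Python script (the node name is a placeholder):

import rospy

rospy.init_node("debug_demo", log_level=rospy.DEBUG)  # also show DEBUG-level output for this node
rospy.logdebug("detailed state: x=%.2f y=%.2f", 1.0, 2.0)
rospy.loginfo("reached goal 3 of 5")
rospy.logwarn("goal aborted, retrying")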