Road Mapping

Mapping a newly constructed road with drones

In this case study, we show an application that maps a road with drones through iterative road tracing. The application begins at the start of the road. In each iteration, it acquires a photo at the current location and computes the road position and direction with a convolutional neural network. It then moves 6 meters along the road and starts the next iteration. We show a demo of this application in the following video (note that the first half of the video plays at 2x speed).



This application is an example of how to write a simple end-to-end application with BeeCluster. The application code is in the casestudy/road_mapping folder, which contains the code required to train the neural network model, the training dataset, and the pre-trained model.

Walking through the code

We show the main function of this application below:

```python
import beecluster

bc = beecluster.Session(appID="trace_road")
loc = initial_position = (-30, 0, 10)
yaw = initial_yaw = -60
stepsize = 6.0
# Active sensing loop (trace the road for 24 steps)
for i in range(1, 25):
    ret = bc.newTask(take_photo, bc, loc, yaw).val
    loc, yaw = parse_result(ret, loc, yaw, stepsize)
    print("iteration: %d road location: (%.2f, %.2f) heading: %.2f" % (i, loc[0], loc[1], yaw))
bc.close()
```

As described in the programming guide, we use beecluster.Session() to create a session that connects the Python client to the BeeCluster server.

The core of this application is an active sensing loop. In each iteration of the loop, we create a BeeCluster task from the take_photo() function. The take_photo() function takes the session handle (bc), a location (loc), and a heading angle (yaw) as inputs, and returns a photo taken at location loc with heading angle yaw. This photo is then passed to the parse_result() function, which extracts the road from the photo and moves the current location along the road direction with a fixed step size.
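The full parse_result() implementation is in the casestudy/road_mapping folder; the following is only a minimal sketch of its logic. It assumes a hypothetical run_cnn() helper that returns the road direction relative to the drone's current heading, and it simplifies the axis conventions:

```python
import math

def parse_result(photo, loc, yaw, stepsize):
    # Hypothetical helper: run the pre-trained CNN on the photo and
    # return the road direction relative to the current heading (degrees).
    delta = run_cnn(photo)
    new_yaw = yaw + delta
    # Advance 'stepsize' meters along the detected road direction,
    # keeping the current altitude loc[2] (axis conventions simplified).
    rad = math.radians(new_yaw)
    new_loc = (loc[0] + stepsize * math.cos(rad),
               loc[1] + stepsize * math.sin(rad),
               loc[2])
    return new_loc, new_yaw
```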

We show the code inside the take_photo() function below:

```python
def take_photo(bc, loc, yaw):
    # Set the yaw (heading) of the drone.
    bc.act("set_yaw_" + str(yaw))
    # Fly to loc.
    bc.act("flyto", loc)
    # Take a photo; accessing the .val member blocks until it is ready.
    ret = bc.act("take_photo_fast").val
    return ret
```

The take_photo() function involves three act() actions: setting the yaw of the drone, flying the drone to the target location, and taking a photo. Note that act() calls are asynchronous (non-blocking); execution blocks only when the .val member of an action's return value is accessed.
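The same pattern applies at the task level. As a sketch, assuming the handle returned by newTask() can be stored and its .val read later (the waypoints below are made up for illustration), independent tasks can be created back to back and joined afterwards, which leaves the runtime free to run them concurrently if spare drones are available:

```python
# Sketch: issue two independent photo tasks at made-up waypoints.
# newTask() itself does not block, so the runtime may assign the
# tasks to different drones and execute them concurrently.
t1 = bc.newTask(take_photo, bc, (-30, 0, 10), -60)
t2 = bc.newTask(take_photo, bc, (-24, 3, 10), -60)
# Accessing .val blocks until the corresponding task has finished.
photo1 = t1.val
photo2 = t2.val
```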

Note

The action names used here are tentative. For now, the runtime system passes these names directly to the drone drivers; a more structured naming scheme, e.g., "drone/sensor/rgb_camera/capture", is under development.

Speculative Execution

The above active sensing loop runs sequentially: each task depends on the result of the previous one. This sequential nature makes it hard to speed up execution by simply parallelizing the tasks and adding more drones.

To address this, BeeCluster introduces speculative execution, a feature that gives developers the option to speed up the execution of an active sensing loop at the cost of using more drones.

In speculative execution, BeeCluster forecasts the future requests of an application. When there are spare drones in the system, BeeCluster dispatches them ahead of time to the locations of the predicted requests. We show an illustration below:

This strategy overlaps the flying time of one drone with the sensing time of another and thus reduces the total execution time.
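To make the benefit concrete, here is a toy timing model. The per-step flying and sensing times are made-up constants, not measurements from this mission, and the two-drone case assumes ideal forecasting:

```python
# Toy timing model with hypothetical constants.
FLY = 10.0    # seconds to fly one 6 m step
SENSE = 5.0   # seconds to take and process a photo
STEPS = 24

# One drone: every step pays the flying time plus the sensing time.
one_drone = STEPS * (FLY + SENSE)        # 360 s

# Two drones with ideal speculation: while one drone senses at step i,
# the other is already flying toward the predicted location of step
# i + 1, so each step costs only the slower of the two phases.
two_drones = STEPS * max(FLY, SENSE)     # 240 s
```

In practice, the speedup depends on how accurately the next request can be forecast and on how far the spare drone must fly to reach the predicted location.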

We show the drone traces with one and two drones for the mission of mapping a newly constructed trail at Magazine Beach in Cambridge, MA.

A short demo of how this works on two real drones is shown below.