The controller for this drone consists of a nominal (unsafe) controller wrapped by a CBF safety filter, which ensures that the command sent to the drone satisfies our safety conditions (barrier functions). Here, the nominal controller is a simple PD controller, commanding a velocity that holds the drone at a hover in the center of the room. The CBF filter takes in this velocity command, along with the current states of the drone and the saber, and computes the control input closest to the nominal command that still meets our safety criteria.
For simplicity, we model the system as a point robot with direct control over the velocity (our control input). This assumption relies on the low-level PX4 velocity controller tracking the reference velocity command sufficiently quickly. The flight controller takes this velocity command, converts it into an acceleration setpoint, which is further converted to thrust and attitude setpoints, and then eventually motor commands.
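As a concrete illustration, a hover-holding PD law of this kind can be sketched in a few lines. This is our own minimal sketch, not the demo's actual code: the gains `KP` and `KD` and the function name `nominal_pd` are hypothetical, and real values would be tuned on the vehicle.

```python
import numpy as np

# Hypothetical gains -- the real values are tuned on the vehicle.
KP, KD = 1.5, 0.4

def nominal_pd(x, v, x_hover):
    """PD law producing a velocity command that holds the hover point.

    x, v: current drone position and velocity; x_hover: hover setpoint.
    The derivative of the position error is -v, hence the minus sign
    on the damping term.
    """
    return KP * (x_hover - x) - KD * v
```

At the setpoint with zero velocity the command is zero, and any displacement produces a velocity command back toward the hover point.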
We use the standard CBF QP formulation:
\[ \begin{aligned} & \underset{u}{\text{minimize}} & & \|u - u_{\text{nom}}\|_{2}^{2} \\ & \text{subject to} & & L_{f}h(z) + L_{g}h(z)u \geq -\alpha(h(z)) \\ \end{aligned} \]
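When only a single affine constraint is active, this QP reduces to a Euclidean projection of \( u_{\text{nom}} \) onto a half-space and has a closed-form solution. The sketch below shows that special case (the names `cbf_qp_single`, `a`, and `b` are ours, not from the original controller); the full problem with multiple constraints would go to a QP solver.

```python
import numpy as np

def cbf_qp_single(u_nom, a, b):
    """Project u_nom onto the half-space {u : a @ u >= b}.

    For one affine CBF constraint, a = L_g h(z) and
    b = -alpha(h(z)) - L_f h(z); this projection is the exact
    minimizer of ||u - u_nom||^2 subject to a @ u >= b.
    """
    slack = a @ u_nom - b
    if slack >= 0:   # constraint inactive: the nominal command is already safe
        return u_nom
    # Otherwise, shift u_nom minimally along a onto the constraint boundary.
    return u_nom + (-slack / (a @ a)) * a
```

The minimal-deviation structure of the objective is what makes the filter unobtrusive: the safe command equals the nominal one whenever the constraint is inactive.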
With the following barrier function:
\[ h = \begin{bmatrix} \vec{x}_{\text{d}} - \vec{x}_{\text{min}} \\ \vec{x}_{\text{max}} - \vec{x}_{\text{d}} \\ \|\vec{x}_{\text{d}} - \vec{x}_{\text{o}}\|_{2} - \Delta T (\vec{v}_{\text{o}} - \vec{v}_{\text{d}})^{T}(\vec{x}_{\text{d}} - \vec{x}_{\text{o}})(\|\vec{x}_{\text{d}} - \vec{x}_{\text{o}}\|_{2})^{-1} - r_{\text{o}} - r_{\text{d}} \end{bmatrix} \]
Here, the subscripts \( \text{d} \) and \( \text{o} \) denote values for the drone and the obstacle, respectively. \( r \) indicates a radius, and \( \Delta T \) is a time constant.
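Putting the pieces together, the filter can be prototyped with an off-the-shelf NLP solver. This is a simplified sketch under stated assumptions, not the demo's implementation: the function name, the default radii, \( \Delta T \), and the linear class-\( \mathcal{K} \) function \( \alpha(h) = \alpha h \) are all our choices, and for brevity the derivative of the \( \Delta T \) braking term is dropped from the obstacle constraint.

```python
import numpy as np
from scipy.optimize import minimize

def cbf_filter(u_nom, x_d, v_d, x_o, v_o, x_min, x_max,
               r_o=0.3, r_d=0.2, dT=0.5, alpha=1.0):
    """Filter a nominal velocity command through a CBF-QP.

    Point-robot model (x_dot = u): the box constraints encode the room
    bounds, and the obstacle constraint uses the braking-distance
    barrier. The Delta-T term's own derivative is dropped for brevity.
    """
    d = x_d - x_o
    dist = np.linalg.norm(d)
    d_hat = d / dist
    h_lo = x_d - x_min                                # stay above min bounds
    h_hi = x_max - x_d                                # stay below max bounds
    h_obs = dist - dT * float((v_o - v_d) @ d_hat) - r_o - r_d

    cons = []
    for i in range(3):
        cons.append({'type': 'ineq', 'fun': lambda u, i=i: u[i] + alpha * h_lo[i]})
        cons.append({'type': 'ineq', 'fun': lambda u, i=i: -u[i] + alpha * h_hi[i]})
    # Approximate h_obs_dot >= -alpha * h_obs with d_hat @ (u - v_o).
    cons.append({'type': 'ineq', 'fun': lambda u: d_hat @ (u - v_o) + alpha * h_obs})

    res = minimize(lambda u: np.sum((u - u_nom) ** 2), u_nom,
                   method='SLSQP', constraints=cons)
    return res.x
```

With the obstacle far away, the filter returns the nominal command untouched; as the saber closes in, the obstacle constraint activates and caps the velocity component toward it.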
Solving this QP is fast. For this demo, we ran our controller at 100 Hz, mainly because our OptiTrack localization is limited to 120 Hz. However, we could have pushed this much higher: on average, each iteration of our CBF (QP matrix construction and solve) took only 0.3 ms.
As part of the SRC launch, the NAV Lab constructed a digital twin of each demo being presented. Below is the 3D Gaussian splat, trained on a series of images captured around the demo space. It is interactive: click and drag to view the scene.