# TensorFlow Lite Micro (TFLM)
The PX4 MC Neural Networks Control module (`mc_nn_control`) integrates a neural network that uses the TensorFlow Lite Micro (TFLM) inference library.
This is a mature inference library intended for use on embedded devices, and is hence a suitable choice for PX4.
This guide explains how the TFLM library is integrated into the mc_nn_control module, and the changes you would have to make to use it for your own neural network.
::: tip
For more information, see the TFLM guide.
:::
## TFLM NN Formats
TFLM uses networks in its own tflite format. However, since many microcontrollers do not have native filesystem support, a tflite file can be converted to a C++ source and header file.
This is what is done in mc_nn_control.
The `tflite` neural network is represented in code by the files `control_net.cpp` and `control_net.hpp`.
## Getting a Network in tflite Format
There are many online resources for generating networks in the `.tflite` format.
For this example we trained the network in the open source Aerial Gym Simulator. Aerial Gym includes a guide, and supports RL both for control and vision-based navigation tasks.
The project includes conversion code for PyTorch -> TFLM in the resources/conversion folder.
## Updating mc_nn_control with your own NN
You can convert a `.tflite` network into a `.cc` file in the Ubuntu terminal with this command:
```sh
xxd -i converted_model.tflite > model_data.cc
```
You will then have to modify `control_net.hpp` and `control_net.cpp` to include the data from `model_data.cc`:
- Take the size of the network from the bottom of the `.cc` file and replace the size in `control_net.hpp` (see the example output below).
- Take the data in the model array in the `.cc` file and replace the data in `control_net.cpp`.
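For reference, the generated `model_data.cc` looks roughly like the following (the array name is derived from the input file name; the bytes and length shown here are placeholders, not a real model):

```cpp
// Output of `xxd -i converted_model.tflite` (placeholder values)
unsigned char converted_model_tflite[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,
  // ... remaining model bytes ...
};
unsigned int converted_model_tflite_len = 3164;
```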
You are now ready to run your own network.
## Code Explanation
This section explains the code used to integrate the NN in control_net.cpp.
### Operations and Resolver
First, we need to create the resolver and load the operators needed to run inference on the NN.
This is done at the top of `mc_nn_control.cpp`.
The number in `MicroMutableOpResolver<3>` represents how many operations you need to run the inference.
A full list of the operators can be found in the micro_mutable_op_resolver.h file.
There are quite a few supported operators, but you will not find the most advanced ones.
In the control example the network is fully connected, so we use `AddFullyConnected()`.
The activation function is ReLU, and `AddAdd()` is used for the bias on each neuron.
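As a rough sketch (not the exact `mc_nn_control` source), registering these operators looks like the following; the use of `AddRelu()` here is an assumption based on the ReLU activation mentioned above:

```cpp
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// The template argument <3> must match the number of AddXxx() calls below.
static tflite::MicroMutableOpResolver<3> resolver;

static void RegisterOps()
{
	resolver.AddFullyConnected(); // dense (fully connected) layers
	resolver.AddRelu();           // ReLU activation
	resolver.AddAdd();            // bias addition on each neuron
}
```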
### Interpreter
In `InitializeNetwork()` we start by setting up the model that we loaded from the source and header file.
Next, the interpreter is set up; this code is taken from the TFLM documentation and is thoroughly explained there.
The end state is that `_control_interpreter` is set up to later run inference with the `Invoke()` member function.
The `_input_tensor` is also defined; it is fetched from `_control_interpreter->input(0)`.
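A minimal, self-contained sketch of this flow is shown below; the arena size, the `control_net_data` symbol, and the error handling are illustrative assumptions rather than the actual `mc_nn_control` values:

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char control_net_data[]; // embedded model from control_net.cpp (name assumed)

namespace
{
constexpr int kArenaSize = 16 * 1024;       // working memory for tensors (size is an assumption)
uint8_t tensor_arena[kArenaSize];
tflite::MicroMutableOpResolver<3> resolver; // operators as in the previous sketch
tflite::MicroInterpreter *_control_interpreter{nullptr};
TfLiteTensor *_input_tensor{nullptr};
}

bool InitializeNetwork()
{
	// Map the flatbuffer that was embedded as a C array
	const tflite::Model *model = tflite::GetModel(control_net_data);

	// Register operators (repeated here so the sketch is self-contained)
	resolver.AddFullyConnected();
	resolver.AddRelu();
	resolver.AddAdd();

	// The interpreter runs inference later via Invoke()
	static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kArenaSize);
	_control_interpreter = &interpreter;

	if (_control_interpreter->AllocateTensors() != kTfLiteOk) {
		return false;
	}

	_input_tensor = _control_interpreter->input(0);
	return true;
}
```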
### Inputs
The _input_tensor is filled in the PopulateInputTensor() function.
The tensor is filled by accessing its `->data.f` member array and writing the required inputs for your network.
The inputs used in the control network are covered in MC Neural Networks Control.
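A minimal sketch of such a population function, assuming the observation has already been assembled into a flat float array (the real input layout is defined by the control network):

```cpp
#include "tensorflow/lite/c/common.h" // TfLiteTensor

// Copy an already-assembled observation vector into the network's input tensor.
// 'observation' and 'num_inputs' are illustrative parameters, not the actual API.
void PopulateInputTensor(TfLiteTensor *input_tensor, const float *observation, int num_inputs)
{
	for (int i = 0; i < num_inputs; i++) {
		// data.f is the tensor's raw float buffer
		input_tensor->data.f[i] = observation[i];
	}
}
```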
### Outputs
For the outputs the approach is fairly similar to the inputs.
After setting the correct inputs and calling the `Invoke()` function, the outputs can be found by getting `_control_interpreter->output(0)`.
From the output tensor you read the `->data.f` array.
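Put together, a hedged sketch of running inference and reading the result (the number and meaning of the outputs depend on your network):

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"

// Run one inference pass and copy the results out of the output tensor.
// 'num_outputs' is an illustrative parameter, not the actual API.
bool RunInference(tflite::MicroInterpreter *interpreter, float *outputs, int num_outputs)
{
	// The inputs must already be written to interpreter->input(0)->data.f
	if (interpreter->Invoke() != kTfLiteOk) {
		return false;
	}

	const TfLiteTensor *output_tensor = interpreter->output(0);

	for (int i = 0; i < num_outputs; i++) {
		outputs[i] = output_tensor->data.f[i];
	}

	return true;
}
```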