* files for PDF exercise
* Explicit call to python2 for systems that default to python3
* Description and comments fixes
[flight_plan] remove warning about dummy WP
Reformatted indents to 2 spaces
Added configs for ps4 gamepads, fixed typos (#68)
Update: for flight testing in the CyberZoo
Fix color gain in mav_course_exercise.xml airframe (#72)
[flight_plan] Re-measured the autoland border. It is not safe to exit the red square, so the drone auto-lands to prevent damage. (#73)
[Camera] Vision runs faster if unnecessary frames are not grabbed. (#74)
Co-authored-by: Matteo Barbera <matteo.barbera97@gmail.com>
* First commit of clean branch
* Added new drone airframe files (xml and model folder) for gazebo
* Renamed sdf file for bebop drone with stereo cams
* Added custom configuration and control panel files
On branch stereoCamerasNewDrone
Changes to be committed:
new file: conf/userconf/tudelft/ralphthesis2020_conf.xml
new file: conf/userconf/tudelft/ralphthesis2020_control_panel.xml
* Added new files necessary for a new bebop model with stereo vision
On branch stereoCamerasNewDrone
Changes to be committed:
new file: conf/airframes/tudelft/bebop_ralphthesis2020_stereo.xml
new file: conf/flight_plans/tudelft/ralphthesis2020_stereo_cyberzoo.xml
modified: conf/simulator/gazebo/airframes/bebop_w_stereo_cams.xml
modified: conf/simulator/gazebo/models/bebop_w_stereo_cams/bebop_w_stereo_cams.sdf
modified: conf/simulator/gazebo/models/bebop_w_stereo_cams/model.config
modified: conf/userconf/tudelft/ralphthesis2020_conf.xml
* Replaced cameras on bebop (front + down) with a stereo setup
On branch stereoCamerasNewDrone
Changes to be committed:
modified: conf/simulator/gazebo/models/bebop_w_stereo_cams/bebop_w_stereo_cams.sdf
* Solved low resolution in left camera
On branch stereoCamerasNewDrone
Changes to be committed:
modified: conf/simulator/gazebo/models/bebop_w_stereo_cams/bebop_w_stereo_cams.sdf
* Created new module "wedgebu" and added it to the airframe "bebop_ralphthesis2020_stereo.xml"
Also used "make" to create the tool for creating modules
On branch wedgeBug
Your branch is up to date with 'origin/wedgeBug'.
Changes to be committed:
modified: conf/airframes/tudelft/bebop_ralphthesis2020_stereo.xml
new file: conf/modules/wedgebug.xml
modified: conf/userconf/tudelft/ralphthesis2020_conf.xml
new file: sw/airborne/modules/wedgebug/wedgebug.c
new file: sw/airborne/modules/wedgebug/wedgebug.h
new file: sw/tools/create_module/create_module_ui.py
new file: sw/tools/create_module/datalink_ui.py
new file: sw/tools/create_module/event_ui.py
modified: sw/tools/create_module/files_create.py
new file: sw/tools/create_module/init_ui.py
new file: sw/tools/create_module/periodic_ui.py
* Added functions to copy images from left and right camera to global variables
On branch wedgeBug
Your branch is ahead of 'origin/wedgeBug' by 1 commit.
Changes to be committed:
modified: conf/airframes/tudelft/bebop_ralphthesis2020_stereo.xml
modified: conf/modules/wedgebug.xml
modified: conf/simulator/gazebo/models/bebop_w_stereo_cams/bebop_w_stereo_cams.sdf
modified: sw/airborne/modules/wedgebug/wedgebug.c
* Added function to compare individual pixels of images of type image_t
* Tried to figure out why the C++ Mat object of the left gray pixels looks different from the left image
* A rough implementation and proof of the copy function (note: copy buf entries by incrementing pointer)
Changes to be committed:
deleted: Color_Image.jpg
modified: conf/airframes/tudelft/bebop_ralphthesis2020_stereo.xml
deleted: merged_stereo_image.jpg
modified: sw/airborne/modules/wedgebug/wedgebug.c
modified: sw/airborne/modules/wedgebug/wedgebug.h
modified: sw/airborne/modules/wedgebug/wedgebug_opencv.cpp
modified: sw/airborne/modules/wedgebug/wedgebug_opencv.h
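The "copy buf entries by incrementing pointer" note above can be sketched as follows. `img_t` is a simplified, hypothetical stand-in for Paparazzi's `image_t`; the field names are assumed for illustration only, not taken from the module.

```c
#include <stdint.h>

/* Simplified stand-in for Paparazzi's image_t; field names are assumed
 * for illustration only. */
typedef struct {
  uint16_t w, h;      /* image dimensions */
  uint32_t buf_size;  /* number of bytes in buf */
  uint8_t *buf;       /* pixel buffer */
} img_t;

/* Copy the source buffer into the destination entry by entry, advancing
 * both pointers, as the commit note describes. Returns 0 on success,
 * -1 if the destination buffer is too small. */
static int img_copy(img_t *dst, const img_t *src) {
  if (dst->buf_size < src->buf_size) {
    return -1;
  }
  uint8_t *d = dst->buf;
  const uint8_t *s = src->buf;
  for (uint32_t i = 0; i < src->buf_size; i++) {
    *d++ = *s++;  /* copy one buffer entry, then increment both pointers */
  }
  dst->w = src->w;
  dst->h = src->h;
  return 0;
}
```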
* Finalized 1) image merging function for YY image and 2) function for saving gray images using OpenCV
* Created C++ function to create and return disparity image. Also changed the Gazebo world.
* 1) Added FPS-like controls for the drone, using an Xbox controller. 2) Added image-cropping functionality to the block-matching function.
* SBM can take 16-bit images now; the save function uses an image_t struct, and a new type called "IMAGE_OPENCV_DISP" was added, specifically for OpenCV disparity images.
* Added OpenCV-dependent opening and closing functions
* Added 1) OpenCV-dependent dilation and Sobel operation functions, 2) 3D point structure to image.h, 3) loop in wedgebug.c for finding the edge point closest to the target.
* Added functions to convert world coordinates to camera coordinates (turned out to be a wrong implementation; the next commit contains the correct one)
* Added functions to convert world to agent to camera coordinates (and vice versa)
* Cleaned up coordinate conversion functions
* 1. Created new world for drone testing
  2. Created new waypoints for drone testing
  3. Created state machine template in module
  4. Debugged the "SBM_OCV" function
  5. Filled in the states "Initial position", "Move to start", "Start position" and "Goal reached"
  6. Started to fill in the state "Move to goal"
  7. Created function to get median distance of object in front ("median_disparity_to_point")
  8. Created function to determine angle between drone and goal point ("heading_towards_waypoint")
  9. Cleaned up code, removed non-essential variables (especially images) and added new variables
* Converted edge detection algorithm into a function and cleaned code more. Also created more global variables.
* Added draft approach for "EDGE_DETECTION"
* Quick save
* First draft of code. The robot can evade a board in its way and reach the goal. Also increased periodic loop frequency to 15 Hz to counter the fast turning of the drone.
* Included code for measuring metrics and displaying them. Also, cleaned the code some more.
* Cleaned code, added comments to describe code and variables, and got rid of a bug in the function calculating the heading direction.
* Removed bug in POSITION_EDGE state, where a new holding point was not set after an obstacle was encountered. Also added various shapes (with brick texture) for testing.
* Added more shapes for experiment and added settings to change in GCS
* Changed camera settings to mimic those of Matthies et al. 2014. The drone can now avoid the standard metal board and the brick wall (type 2, 2 x 2 meters)!
* Implemented state machine for control modes.
* Simplified code
* Cleaned code
* Added a flag to detect changes in the navigation mode used
* Merged background processes under the condition that the state machine is running in a certain state
* Debugged time measurement of total time (metric 1)
* Cleaned code
* Made debug options state-number independent (i.e. they have been renamed to include the name of the state instead of its number)
* Removed states POSITION_INITIAL, MOVE_TO_START and POSITION_START from the finite state machine. All associated variables have also been removed.
* Changed "NUMER_OF_STATES" to "NUMBER_OF_STATES". Also increased "max_obstacle_confidence" from 3 to 5, to avoid false obstacle detections.
* Added metric measurement when in direct flight.
* Added aluminum bar as object for goal marker, and created a button for the drone to move to the start position and another button for the drone to start the experiment. In the current example the drone goes around a 2 x 2 meter wall.
* Added saving flag and functions for report (in the C and C++ files)
* Added median kernel dimensions as options in settings
* Made unused waypoints invisible in the GCS
* Added two more objects for investigation (with a larger hole for the drone to fly through). Also fine tuned the code.
* Commit before change of depth image.
* Added 16-bit background processes for greater precision. Also changed some functions in the C++ file to better handle 16-bit images.
* Using depth works, but it's not fine-tuned and the code is messy.
* Obligatory commit so that I can check out a different branch.
* Added depth image, on which edge detection is now based (it used to be based on disparity). It works relatively OK with all shapes investigated.
* A save after the experiments for the first report were finalized.
* After demonstration
* Added code to toggle between p2DWedgeBug and p3DWedgeBug.
* Removed original files that were created during the previous merge process.
* Added background processes that work with a 16-bit disparity image (i.e. more precise distance measurement)
* Made finite state machine more readable. Also refined the "find_best_edge_coordinates2" function. Additionally, added function to go from image-plane coordinates to camera coordinates using depth as input. Lastly, depth is now measured in cm when considering the thresholds.
* Successfully integrated edge search over depth image instead of disparity image (tested in direct control mode only). Next step is to implement this for the guided mode.
* Addressed bug: the baseline used by the drone model was not the same as in the C file. The baseline in the drone model now matches the one in the C file.
By addressing this bug of the Bug algorithm, the drone can now judge distances accurately. This in turn results in the drone being able to accurately avoid the 2 x 2 m wall obstacle.
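The baseline fix above matters because stereo depth scales linearly with the assumed baseline. A minimal, hypothetical sketch (function name and parameters are illustrative, not from the module): for a rectified stereo pair, depth is recovered as Z = f · B / d, so a wrong baseline B scales every distance estimate by the same wrong factor.

```c
#include <math.h>

/* Illustrative sketch of stereo depth recovery: Z = f * B / d, with the
 * focal length f in pixels, baseline B in meters, and disparity d in
 * pixels. Name and parameters are hypothetical, not the module's API. */
static float depth_from_disparity(float focal_px, float baseline_m,
                                  float disparity_px) {
  if (disparity_px <= 0.0f) {
    return INFINITY;  /* no valid stereo match at this pixel */
  }
  return focal_px * baseline_m / disparity_px;
}
```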
* Changed flight plan to start experiment automatically.
* Just a checkpoint
* Added tree models.
* Removed unused objects from world file.
* Reverted back to opencv_bebop
* Reverted back to original files for push to master
* Added new shapes for experimentation
* Deleted blacklisted
* Cleaned code before push
* Added missing variables and declarations.
* Ran style bash script and renamed image enum name (IMAGE_INT16)
* Corrected typos and deleted unused control panel config file.
* Add video capture functionality to video_capture module
The video_capture module can now capture timestamped image-sequences
using the buttons or setting in the GCS. Images are stamped using
their capture pprz_ts, which can be synchronized with the csv file
logger (todo).
(video_capture) Add video capture functionality to video_capture module
(video_capture) name images with current timestamp
(video_capture) Log capture timestamp instead of current time
(video_capture) Minor XML fixes
(video_capture) Store pprz_ts-stamped images in boot-time-stamped folder
Stamping images with calendar time does not make sense in simulation,
which has an independent clock that runs at a different rate and
has no concept of calendar time.
* Minor usability tweaks to file logger
* XML changes and new gazebo worlds after collecting datasets
Flight plan for orange pole dataset
Add flightplan to fly towards textured panel
XML changes after collecting dataset
Revert video capture and stream to front camera
* FIX datalink timeout in NPS
* minor fix: make colorfilter's color_count volatile
Shared between video thread and autopilot thread, so should technically
be volatile (although it seemed to work already).
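The pattern described above can be sketched as follows; the names here are illustrative stand-ins, not the actual colorfilter module symbols. `volatile` stops the compiler from caching the value in a register across reads, though it does not by itself make updates atomic or provide memory ordering.

```c
#include <stdint.h>

/* Sketch of a counter written by the video thread and polled by the
 * autopilot thread. `volatile` forces a real memory access on every
 * read/write; it does NOT provide atomicity or ordering guarantees.
 * Names are illustrative, not the actual module symbols. */
static volatile uint32_t shared_color_count = 0;

static void video_thread_update(uint32_t count) {
  shared_color_count = count;  /* written from the video thread */
}

static uint32_t autopilot_read(void) {
  return shared_color_count;   /* re-read from memory on every call */
}
```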
* updated orange avoider to module state machine and added simple guided mode example (#46)
* updated orange avoider to module state machine and added simple guided mode example
* Apply suggestions from code review
Co-Authored-By: kirkscheper <kirkscheper@gmail.com>
* add mutex and update guided to return to arena
* Minor fix: increase freq of WP_MOVED telemetry message
This makes the waypoint-based orange avoider example easier to
follow.
* minor readability tweak for file logger
* Enable video_capture by default, place before object detector
* Fix OpenCV inconsistencies in NPS, update to 3.4.5 (#47)
With this fix, the opencv example module will link the OpenCV lib from the bebop_opencv submodule rather than the locally installed one. This prevents inconsistent behavior between the real drone and the simulator; previously a bug in OpenCV caused U and V channels to be swapped depending on the installed version. The Bebop and sim now both use OpenCV 3.4.5.
Commits:
* FIX color loop error
Not exactly sure what went wrong, but this code works on both sim
and bebop without segfaulting. Note that the YUV channel order problem
still exists in this commit.
* YUV test function
* WIP SEGFAULT Try to upgrade opencv_bebop
This commit segfaults on the Bebop, caused by OpenCV's cvtColor
function (confirmed with printf and gdb). Might be caused by this:
https://github.com/opencv/opencv/issues/12176 .
* WIP Update opencv_bebop
* WIP get compilation working for nps
* WIP Clean up makefile, update conf LDFLAGS
* Restructure opencv_bebop makefile
Also update pprz because paths and ldflags have changed.
* Minor opencv_bebop makefile changes
* Fix YUV channel order
* FIX Consistent OpenCV behavior on drone and in NPS, OpenCV 3.4.5
With this fix, the opencv example module will link the OpenCV lib
from the bebop_opencv submodule rather than the locally installed one.
This prevents inconsistent behavior between the real drone and the simulator;
previously a bug in OpenCV caused U and V channels to be swapped depending
on the installed version. The Bebop and sim now both use OpenCV 3.4.5.
* update opencv_bebop
* Remove opencv demo module from course airframe
* Separate grayscale and color examples
* Add joystick xml for SM600
Joystick should be set in Reflex XTR mode. Tested on Ubuntu 16,
but not on 18 yet.
* Change sm600.xml to G3-G4.5 axis order
Reflex mode behaves differently between Ubuntu 16 and 18... :/
* Add OptiTrack distance measuring tool
Usage:
1. Connect through ethernet to OptiTrack pc.
2. In dist.py, change id number (line 19) to rigid body id to track.
3. Run dist.py
4. Results are not saved, copy manually.
dist.py was written by Shushuai Li.
* Made find_object_centroid documentation more comprehensive (#50)
* GPS coordinates of CyberZoo (#51)
* GPS coordinates of CyberZoo
Update GPS coordinates of the CyberZoo along the green carpet
* Update conf/flight_plans/tudelft/course2019_orangeavoid_cyberzoo.xml
* Update conf/flight_plans/tudelft/course2019_orangeavoid_cyberzoo.xml
* Fixed capitalisation in xbox controller config and added Turnigy controller (#52)
* Fixed capitalisation in xbox controller config and added Turnigy controller
- Fixed capitalisation in conf/joystick/xbox_gamepad.xml
- Added Turnigy controller conf/joystick/turnigy_evolution.xml
* Added a newbie-friendly version of xbox_gamepad
On the Xbox controller, both sticks have springs on both axes,
and in xbox_gamepad.xml the yaw and throttle are on the same axis.
This configuration puts yaw on the triggers, making for a more pleasant experience.
* FIX missing #endif in abi_sender_ids.h
* Update airframes to use bebop_cam module
TODO: retune, test and tag in cyberzoo
* Update opencv_bebop submodule to latest master
Also updated linked libraries in cv_opencvdemo module. Should
be re-tested on a clean system.
* Further cleanup for pull request
* Change sm600.xml throttle range
Throttle did not seem to behave correctly. Calibration only makes
things worse (incorrect ranges and lots of noise). The current xml
seems to work with an UNcalibrated joystick, with the trim sliders
in the middle (including throttle).
* Change default joystick to sm600.xml
* Revert sm600.xml range
* Re-tune camera
Camera settings were changed after bebop camera overhaul.
* Further tuning of green detector
* Tag aircraft after test flights
* FIX 'land here' incorrect capitalization
Changed back to lower case to maintain compatibility with other
flight plans that may not have been merged into master.
* FIX reduce WP_MOVED update rate
Avoidance code used waypoint_set_ functions instead of wp_move_,
which lack the downlink message to update the wp position in the
GCS. Implemented the missing move functions and used these instead,
and reduced the update rate of WP_MOVED to its original, low rate.
* FIX file_logger error message referring to video_capture
* FIX file_logger for fixed wing ac
* FIX sm600.xml missing newline
* WIP Cleanup opti_dist
* [swarm] add formation control for rotorcraft
Based on Ewoud Smeur (INDI) and Hector Garcia de Marina (Formation
control) work. See
https://blog.paparazziuav.org/2017/12/02/pilot-a-super-rotorcraft
- several UAVs can be controlled from the ground (using a joystick for
instance) by sending proper acceleration setpoints to the INDI guidance
controller
- configuration of the formation is done from a JSON file (2 example
files provided)
Some extra changes:
- update accel from IMU in GPS passthrough INS
- add accel setpoint to ABI messages
- by to accel setpoint in INDI guidance
- send ground reference from natnet2ivy
- possibility to have a joystick labeled 'GCS' with input2ivy
* [autopilot] refactor autopilot API for both firmwares
With this, fixedwing and rotorcraft are mostly using the same interface
for the autopilot. Some specific code and messages handling are still
firmware dependent.
A large part of the autopilot logic of the fixedwing is moved from
main_ap to autopilot_static.
More getter/setter functions are provided.
* [autopilot] update the rest of the system and the conf
for using the refactored autopilot API
* [autopilot] fix some errors from CI servers
* [actuators] use dummy actuators module to prevent autoloading
* Rename Bart_heliDD_INDI.xml to tudelft_bs_helidd_indi.xml
* Rename Bart_heliDD_pid.xml to tudelft_bs_helidd_pid.xml
* Delete tudelft_course2016_bebop_colorfilter.xml
* Delete tudelft_course2016_bebop_avoider.xml
* [actuators] don't autoload actuators when set to 'none'
* [gcs] autodetect firmware for strip mode button
You can use the Hat<Position>(<hat_name>) function to trigger events,
where <Position> is one of
Centered/Up/Right/Down/Left/RightUp/RightDown/LeftUp/LeftDown
Only one hat can be used, but the interface will allow multiple ones in
the future.
closes #460