Event cameras can be used for high-frequency tracking of objects without motion blur.
A. Glover and C. Bartolozzi, “Robust visual tracking with a freely-moving event camera,” 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 3769-3776, doi: 10.1109/IROS.2017.8206226.
A. Glover and C. Bartolozzi, “Event-driven ball detection and gaze fixation in clutter,” 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 2203-2208, doi: 10.1109/IROS.2016.7759345.
Lightweight neural networks can be trained with event data to perform complex tasks such as human pose estimation.
N. Carissimi, G. Goyal, F. D. Pietro, C. Bartolozzi and A. Glover, “[WIP] Unlocking Static Images for Training Event-driven Neural Networks,” 2022 8th International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP), 2022, pp. 1-4, doi: 10.1109/EBCCSP56922.2022.9845526.
Low-level processing of the event stream (e.g. corner detection and convolution) can be achieved in real time with asynchronous output.
A. Glover, A. Dinale, L. D. S. Rosa, S. Bamford and C. Bartolozzi, “luvHarris: A Practical Corner Detector for Event-Cameras,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 12, pp. 10087-10098, 1 Dec. 2022, doi: 10.1109/TPAMI.2021.3135635.
V. Vasco, A. Glover and C. Bartolozzi, “Fast event-based Harris corner detection exploiting the advantages of event-driven cameras,” 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 4144-4149, doi: 10.1109/IROS.2016.7759610.
L. d. S. Rosa, A. Dinale, S. Bamford, C. Bartolozzi and A. Glover, “High-Throughput Asynchronous Convolutions for High-Resolution Event-Cameras,” 2022 8th International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP), 2022, pp. 1-8, doi: 10.1109/EBCCSP56922.2022.9845500.
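To give a flavour of what event-by-event, asynchronous low-level processing looks like, here is a minimal sketch of a decayed time surface updated per event — a common representation underlying event-driven corner detectors. All names, resolutions, and parameters here are illustrative assumptions, not the published luvHarris implementation.

```python
import numpy as np

WIDTH, HEIGHT = 640, 480  # illustrative sensor resolution

# Time surface: each pixel stores the timestamp of its most recent event.
time_surface = np.zeros((HEIGHT, WIDTH), dtype=np.float64)

def process_event(x, y, t, polarity, tau=0.1):
    """Update the time surface for one event and return a local
    exponentially-decayed patch around it (a typical input for
    event-driven corner scoring). Runs once per event, so output
    is produced asynchronously as events arrive."""
    time_surface[y, x] = t
    # 5x5 neighbourhood, clipped at the image border.
    y0, y1 = max(0, y - 2), min(HEIGHT, y + 3)
    x0, x1 = max(0, x - 2), min(WIDTH, x + 3)
    patch = np.exp(-(t - time_surface[y0:y1, x0:x1]) / tau)
    return patch

# Each incoming event triggers an immediate update, with no frame to wait for:
patch = process_event(x=320, y=240, t=0.5, polarity=1)
```

The key property this sketch shows is that computation is triggered per event rather than per frame, which is what makes real-time, low-latency output possible on high-rate event streams.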
With the right algorithm, autonomous vehicles can recognise where they are, even in extreme weather conditions.
A. J. Glover, W. P. Maddern, M. J. Milford and G. F. Wyeth, “FAB-MAP + RatSLAM: Appearance-based SLAM for multiple times of day,” 2010 IEEE International Conference on Robotics and Automation, 2010, pp. 3507-3512, doi: 10.1109/ROBOT.2010.5509547.
A. Glover, W. Maddern, M. Warren, S. Reid, M. Milford and G. Wyeth, “OpenFABMAP: An open source toolbox for appearance-based loop closure detection,” 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 4730-4735, doi: 10.1109/ICRA.2012.6224843.
M. Milford et al., “Condition-invariant, top-down visual place recognition,” 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 5571-5577, doi: 10.1109/ICRA.2014.6907678.
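The core idea behind appearance-based place recognition is comparing compact descriptors of what a place looks like. The sketch below uses plain cosine similarity between bag-of-visual-words histograms as a simple stand-in for the probabilistic matching in FAB-MAP; the place names and histograms are invented for illustration.

```python
import numpy as np

def appearance_similarity(hist_a, hist_b):
    """Cosine similarity between two bag-of-visual-words histograms:
    a simple proxy for appearance-based place matching."""
    a, b = np.asarray(hist_a, float), np.asarray(hist_b, float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# A query image's word histogram is matched against previously visited places:
places = {"lab": [5, 0, 2, 1], "corridor": [0, 4, 0, 3]}
query = [4, 1, 2, 0]
best = max(places, key=lambda p: appearance_similarity(places[p], query))
```

Condition-invariant recognition (night, rain, fog) then hinges on choosing descriptors whose similarity survives drastic appearance change, which is what the papers above address.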
Before deep neural networks, learning could be achieved by on-line generation of models, such as Markov Decision Processes, allowing a robot to build its understanding of simple worlds and language.
A. J. Glover and G. F. Wyeth, “Toward Lifelong Affordance Learning Using a Distributed Markov Model,” in IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 1, pp. 44-55, March 2018, doi: 10.1109/TCDS.2016.2612721.
R. Schulz, A. Glover, M. J. Milford, G. Wyeth and J. Wiles, “Lingodroids: Studies in spatial cognition and language,” 2011 IEEE International Conference on Robotics and Automation, 2011, pp. 178-183, doi: 10.1109/ICRA.2011.5980476.