Learn on the Fly
In this study, we explore the biologically-inspired Learn-On-The-Fly (LOTF) method that actively learns and discovers patterns with improvisation and sensory intelligence, including pheromone trails, structure from motion, sensory fusion, sensory inhibition, and spontaneous alternation. LOTF is related to classic online modeling and adaptive modeling methods. However, it aims to solve more comprehensive, ill-structured problems such as human activity recognition from a drone video in a disastrous environment. It helps to build explainable AI models that enable human-machine teaming with visual representation, visual reasoning, and machine vision. It is anticipated that LOTF will have an impact on artificial intelligence, video analytics for searching and tracking survivors’ activities for humanitarian assistance and disaster relief (HADR), field augmented reality, and field robotic swarms.
Keywords: AI, Machine learning, Drone, UAV, Video analytics, SLAM, HADR
1 Introduction
We often do things “on the fly” in everyday life: we gain experience without preparation, responding to events as they happen. We often learn new things in that way. For example, children learn to walk, talk, and ride a bike on the fly. Historical examples include Neil Armstrong landing the lunar module on the Moon, the Apollo 13 crew managing to return to Earth after an explosion, and network administrators responding to the first computer worm, created by Robert Morris. More recently, epidemiologists have been fighting the COVID-19 coronavirus outbreak based on live data.
Learn-on-the-fly (LOTF) is a way of active learning by improvisation under pressure. It is not about learning how to fly, but rather about how to learn quickly in challenging situations that may be mobile, remote, and disastrous, where data-centric passive learning methods often fail. LOTF is related to classic “online modeling” or “adaptive modeling” methods such as the Kalman filter, particle filter, recursive time-sequence models, and system identification, which adapt to dynamic environments. LOTF aims to tackle more comprehensive, ill-structured problems such as human activity recognition from a drone video in a disastrous environment.
In addition, LOTF aims to build explainable AI models that enable human-machine teaming (including visual representation and visual reasoning) toward human-like machine vision. LOTF can also incorporate lightweight machine learning algorithms such as Bayesian networks. In this paper, the author reviews biologically-inspired LOTF algorithms in non-technical terms, including pheromone trails, structure from motion, sensory fusion, sensory inhibition, and spontaneous alternation. It is anticipated that LOTF will have an impact on artificial intelligence, in particular, video analytics for searching and tracking survivors’ activities for humanitarian assistance and disaster relief (HADR), augmented reality, and robotic swarms.
2 Pheromone Trails
It has long been known that social insects such as ants use pheromones to leave information on their trails: for foraging food, marking efficient routes, searching, and making recommendations. Similarly, Amazon’s retail website suggests related products based on the items in a user’s online shopping cart. In practice, the term “pheromone” proves useful in describing behaviors such as trail formation in a sequence of spatial and temporal data.
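To make the idea concrete, the following minimal sketch (an illustration rather than the study’s implementation; the grid size, deposit amount, and evaporation rate are assumed values) maintains a digital pheromone map over a spatial grid: cells visited at each time step receive a deposit while the whole map slowly evaporates, so frequently traveled routes accumulate into trails.

```python
import numpy as np

class PheromoneMap:
    """Minimal digital-pheromone grid: deposits accumulate at visited cells
    and evaporate over time, so frequent routes stand out as trails."""

    def __init__(self, height, width, deposit=1.0, evaporation=0.05):
        self.grid = np.zeros((height, width), dtype=np.float32)
        self.deposit = deposit          # amount added at each visited cell
        self.evaporation = evaporation  # fraction lost every time step

    def step(self, positions):
        """positions: iterable of (row, col) cells visited in this step."""
        self.grid *= (1.0 - self.evaporation)   # global evaporation
        for r, c in positions:
            self.grid[r, c] += self.deposit      # local deposit

    def strongest_cell(self):
        """Return the (row, col) cell with the highest accumulated pheromone."""
        return np.unravel_index(np.argmax(self.grid), self.grid.shape)


# Toy usage: an agent repeatedly moving along one row builds a visible trail.
pmap = PheromoneMap(height=20, width=20)
for _ in range(50):
    pmap.step([(10, c) for c in range(5, 15)])
print(pmap.strongest_cell())   # a cell on row 10
```

The same deposit-and-evaporate bookkeeping could be applied to positions tracked across video frames to reveal recurring movement paths in spatial and temporal data.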
3 Structure from Motion
Motion perception is an instinct for survival and a vital channel for mapping our world. To extract motion features, we can use Optical Flow to describe motion, direction, and strength in terms of motion vectors. Optical Flow assumes that the brightness distribution of moving objects in a sequence of images is consistent, which is referred to as “brightness constancy.” We use the Horn-Schunck algorithm to minimize the global energy over the image. This algorithm generates a high density of global optical flow vectors, which is useful for measurement purposes. We then use grid density to define the number of motion vectors in a frame; for example, we can plot a motion vector for every 10 pixels horizontally and vertically.
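The sketch below is a minimal NumPy implementation of the Horn-Schunck iteration described above, assuming two grayscale frames of the same size; the smoothness weight alpha, the iteration count, and the every-10-pixels sampling grid are illustrative choices rather than values from the study.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(frame1, frame2, alpha=1.0, n_iter=100):
    """Dense optical flow (u, v) from the Horn-Schunck global-energy iteration."""
    f1 = frame1.astype(np.float32)
    f2 = frame2.astype(np.float32)

    # Spatio-temporal image derivatives (simple finite differences).
    kx = np.array([[-1, 1], [-1, 1]], dtype=np.float32) * 0.25
    ky = np.array([[-1, -1], [1, 1]], dtype=np.float32) * 0.25
    kt = np.ones((2, 2), dtype=np.float32) * 0.25
    Ix = convolve(f1, kx) + convolve(f2, kx)
    Iy = convolve(f1, ky) + convolve(f2, ky)
    It = convolve(f2, kt) - convolve(f1, kt)

    # Kernel that averages a pixel's flow over its neighborhood (smoothness term).
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6 ],
                    [1/12, 1/6, 1/12]], dtype=np.float32)

    u = np.zeros_like(f1)
    v = np.zeros_like(f1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Update enforcing brightness constancy plus global smoothness.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Grid density: keep one motion vector every 10 pixels for plotting, e.g.
# u_grid, v_grid = u[::10, ::10], v[::10, ::10]
```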
4 Sensory Fusion
5 Sensory Inhibition
6 Spontaneous Alternation Behavior (SAB)
Creatures in nature commonly learn on the fly to adapt to changing environments. One instinctual behavior is randomization, used to search for alternative foraging paths or to escape collision situations. When an ant gets lost, it wanders randomly until it hits a trail marked with pheromones. This pattern occurs in tests with many different animals and is called spontaneous alternation behavior (SAB). Spontaneous alternation of paths for an autonomous robot, a search engine, or a problem-solving algorithm can help to explore new areas and avoid deadlock situations. Spontaneous alternation is also a primitive strategy for collision recovery. Collisions occur in many modern electronic systems, from autonomous driving vehicles to data communication protocols. In a variation of the SAB strategy for collision recovery, when a collision occurs, the system spontaneously switches to different sensors or channels, or waits for a random interval and reconnects. This “back off” and reconnect process is similar to SAB and solves the problem of deadlock. SAB is necessary for missions involving the search for and tracking of survivors for humanitarian assistance and disaster relief (HADR), especially when communication breaks down, the system collapses or runs into a deadlock, or deep, extended searches for victims in missed spots are required.
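As a hedged sketch of this recovery pattern (the channel names, failure model, and back-off limits below are hypothetical, not part of the study), the loop alternates spontaneously among sensor channels and waits a random interval after each collision before reconnecting, so the system does not deadlock on a single failing path.

```python
import random
import time

def connect(channel):
    """Placeholder for a real sensor or communication link; here it is assumed
    to succeed or fail at random for demonstration purposes."""
    return random.random() > 0.5

def sab_recover(channels, max_attempts=10, max_backoff=2.0):
    """Spontaneous-alternation recovery: after each collision/failure, switch
    to a different channel and back off for a random interval before retrying."""
    current = random.choice(channels)
    for _ in range(max_attempts):
        if connect(current):
            return current                        # recovered on this channel
        # Spontaneously alternate to a channel other than the one that failed.
        current = random.choice([c for c in channels if c != current])
        time.sleep(random.uniform(0.0, max_backoff))  # random back-off interval
    return None                                   # give up after max_attempts

# Example: alternate among three hypothetical links until one succeeds.
print(sab_recover(["camera", "lidar", "radio"]))
```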
7 Conclusions
In this study, we explore the biologically-inspired Learn-On-The-Fly (LOTF) method that actively learns and discovers patterns with improvisation and sensory intelligence, including pheromone trails, structure from motion, sensory fusion, sensory inhibition, and spontaneous alternation. LOTF is related to classic “online modeling” or “adaptive modeling” methods. However, it aims to solve more comprehensive, ill-structured problems such as human activity recognition from a drone video in a disaster scenario. LOTF helps to build explainable AI models that enable human-machine teaming, including visual representation and visual reasoning, toward human-like machine vision. It is anticipated that LOTF will have an impact on artificial intelligence, video analytics for searching and tracking survivors’ activities for humanitarian assistance and disaster relief (HADR), field augmented reality, and field robotic swarms.
LOTF is an evolving approach that moves from data-centric to sensor-centric, from rigid to adaptive, from unexplainable to explainable, from numeric to intuitive, and from curve fitting to semantic reasoning. Open challenges remain: How can we scale up the system? How should we implement sensory adaptation as inhibition? And how do we balance the flexibility and efficiency of the algorithms?
The author would like to thank Sean Hackett and Florian Alber for data collection and prototyping, Professor Mel Siegel for his discussions and references on sensors and sensing, and Dennis A. Fortner for his organization. This study is in part sponsored by the NIST PSCR/PSIA program and Northrop Grumman Corporation. The author is grateful to Program Managers Jeb Benson, Scott Ledgewood, Neta Ezer, Justin King, Erin A. Cherry, Isidoros Doxas, Donald D. Steiner, Paul Conoval, and Jason B. Clark for discussions, reviews, and advice.
- 1. Wikipedia: On the fly. Captured in 2020
- 3. Hull, C.L.: Principles of Behavior. Appleton-Century, New York (1943)
- 5. DARPA Grand Challenge (2016). https://en.wikipedia.org/wiki/DARPA_Grand_Challenge
- 6. von Békésy, G.: Sensory Inhibition. Princeton University Press (1967)
- 8. Wigglesworth, V.B.: Insect Hormones. W.H. Freeman and Company (1970)
- 9. Cai, Y.: Ambient Diagnostics. CRC Press (2014 and 2019)
- 10. Horn, B.K.P., Schunck, B.G.: Determining optical flow. Artificial Intelligence, vol. 17, pp. 185–203 (1981). Manuscript available on MIT server
- 11. Photogrammetry (2020). https://en.wikipedia.org/wiki/Photogrammetry
- 12. OpenCV: Basic concept of the homography explained with code (2020). https://docs.opencv.org/master/d9/dab/tutorial_homography.html
- 13. Wikipedia: Structure from Motion. https://en.wikipedia.org/wiki/Structure_from_motion
- 16. FAST corner detection (2020). https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_fast/py_fast.html
- 17. Wikipedia: Simultaneous localization and mapping (SLAM) (2020). https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping
- 18. Hackett, S., Cai, Y., Siegel, M.: Activity recognition from firefighter’s helmet. In: Proceedings of CISP-BMEI, Huaqiao, China (2019)
- 19. Wikipedia: Lateral inhibition (2020). https://en.wikipedia.org/wiki/Lateral_inhibition