Tuesday, 19 February 2013

Ship Hull Inspection Diver Goggle Eye, Light, and Camera Tracking for AI ROV or AUV Training


Some things humans just do very well, and when we try to get robotic systems to do the same things, we have one hell of a time. Perhaps you've watched how long it takes, and how many computational operations it takes, for a software program and a vision system to pick out all the green M&Ms from a bowl, and yet a three-year-old can do this within moments of the bowl being put in front of them. Well, now consider a 3-D underwater environment where an AUV (autonomous underwater vehicle) is doing ship hull inspections.
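To put the software side of that M&M example in concrete terms, here is a rough sketch (in Python with OpenCV, my own illustration rather than anything from the paper mentioned below) of what a machine has to do just to isolate and count green candies; the color thresholds and the file name are made-up placeholders.

    # Illustrative only: threshold a "green" hue band and count connected blobs.
    import cv2
    import numpy as np

    def count_green_blobs(image_path, min_area=50):
        img = cv2.imread(image_path)                 # BGR photo of the bowl
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)   # hue is easier to threshold than RGB
        lower = np.array([40, 60, 60])               # rough lower bound for "green"
        upper = np.array([85, 255, 255])             # rough upper bound for "green"
        mask = cv2.inRange(hsv, lower, upper)
        num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        # Label 0 is the background; keep only blobs large enough to be candy.
        return sum(1 for i in range(1, num_labels)
                   if stats[i, cv2.CC_STAT_AREA] >= min_area)

    print(count_green_blobs("bowl_of_mms.jpg"))

And that is before dealing with glare, overlapping candies, or changing light - the things a toddler's visual system handles without effort.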
Now then, there is a decent research paper on this topic: "Advanced Perception, Navigation and Planning for Autonomous In-Water Ship Hull Inspection," by Franz S. Hover, Ryan M. Eustice, Ayoung Kim, Brendan Englot, Hordur Johannsson, Michael Kaess, and John J. Leonard. The paper surveys various autonomous underwater hull inspection robots and the algorithms and strategies these devices use to ensure an inspection worthy of the assignment.
In my proposal, I'd like to meld the work already done here, and the survey of these various robotic units to date, with the expert ship hull inspection diver's inherent ability to see, understand, and inspect. Specifically, I'd like to record every move a human makes to accomplish the task of ship hull inspection, and also aircraft pre-flight inspection - the latter both because it too is needed and because it is easier to capture such things above ground than underwater.
Putting people at risk doing pre-flight inspections in icy outdoor conditions, or on the flight deck of an aircraft carrier, is simply not necessary if we can train autonomous robotic systems to do it for us. The same goes for the dangerous conditions of underwater inspections, where we also need to replace human labor with robotics. Unfortunately, progress so far has been slow; humans are still much better at it, and the human mind decides very easily what needs a closer look and inherently knows, and can see, what is already fine.
How do humans do this so well? From experience, memory, knowledge, and intuition (call it long-term memory of past movements and observations). What if we used eye-tracking to see where the diver was looking, and coupled that, in a time-synced video sequence, with where he or she was aiming the video camera and light, and how they moved in and out of position in real time based on exactly what they were looking at? What if we did that, and allowed a little bio-inspired methodology back into the algorithmic computer science lab?
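To make that idea concrete, here is a minimal sketch (in Python, purely an illustration of the data plumbing, not a real system) of time-aligning goggle eye-tracker samples with camera heading, light heading, and diver position, so each moment of "where the expert looked" is paired with "what the expert did." Every class, field, and number below is a hypothetical placeholder; actual eye-tracker and dive-rig telemetry formats will differ.

    # Illustrative sketch: pair each gaze sample with the nearest-in-time rig sample,
    # producing records an imitation-learning pipeline could later train on.
    from dataclasses import dataclass
    from bisect import bisect_left

    @dataclass
    class GazeSample:
        t: float            # seconds since the start of the dive
        x: float            # normalized gaze coordinates in the goggle camera frame
        y: float

    @dataclass
    class RigSample:
        t: float
        camera_pan: float   # where the handheld camera points (degrees)
        camera_tilt: float
        light_pan: float    # where the dive light points (degrees)
        light_tilt: float
        diver_xyz: tuple    # diver position relative to the hull (meters)

    def nearest(samples, t):
        """Return the sample whose timestamp is closest to t (samples sorted by t)."""
        times = [s.t for s in samples]
        i = bisect_left(times, t)
        candidates = samples[max(i - 1, 0):i + 1]
        return min(candidates, key=lambda s: abs(s.t - t))

    def align(gaze_log, rig_log):
        """Pair each gaze sample with the closest rig sample in time."""
        return [(g, nearest(rig_log, g.t)) for g in gaze_log]

    # Tiny made-up example of two aligned records.
    gaze = [GazeSample(0.00, 0.51, 0.48), GazeSample(0.10, 0.62, 0.40)]
    rig = [RigSample(0.02, 10.0, -5.0, 12.0, -4.0, (1.2, 0.3, -2.0)),
           RigSample(0.12, 11.5, -5.5, 13.0, -4.5, (1.2, 0.3, -2.1))]
    for g, r in align(gaze, rig):
        print(f"t={g.t:.2f}s gaze=({g.x:.2f},{g.y:.2f}) camera_pan={r.camera_pan}")

The hard part, of course, is what a learning system does with those paired records afterward - but capturing them is something the diver, the eye-tracker, and the camera rig could start on today.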
We need the divers talking to the programmers and engineers, because something is amiss and these systems aren't good enough yet to replace the human divers. Please consider all this and think on it.
