Vision is arguably the most crucial sense (data input) on which an intelligent being bases its knowledge and information.
Mergen uses Y′UV, the default colour model output by many cameras, instead of RGB. In terms of the raw byte values stored, this YUV is no different from YCbCr.
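To make the colour-model choice concrete, here is a minimal sketch of the standard full-range BT.601 RGB → Y′CbCr conversion, the matrix most camera pipelines use when they emit "YUV". The function name and the assumption that Mergen's frames follow exactly this matrix are illustrative, not taken from the project's code.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>

// Full-range BT.601 RGB -> Y'CbCr conversion (the "YUV" commonly produced
// by camera pipelines). Coefficients are the standard BT.601 ones; that
// Mergen uses exactly this matrix is an assumption for illustration.
std::array<uint8_t, 3> rgbToYCbCr(uint8_t r, uint8_t g, uint8_t b) {
    auto clamp = [](double v) -> uint8_t {
        return static_cast<uint8_t>(std::min(255.0, std::max(0.0, std::round(v))));
    };
    double y  =         0.299    * r + 0.587    * g + 0.114    * b;
    double cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5      * b;
    double cr = 128.0 + 0.5      * r - 0.418688 * g - 0.081312 * b;
    return {clamp(y), clamp(cb), clamp(cr)};
}
```

Note that the chroma channels (Cb, Cr) are centred at 128, so a pure grey pixel keeps only its luma component.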
This component contains the following steps:
- Camera: responsible for interfacing with Android's camera API, reading image frames and passing them on to the next step. This step is platform-specific.
- Method A: divides an image frame into numerous segments and performs various analyses on some or all of them, including image tracing. The Segment struct defines a segment and the type of path points in it. The analysed Segments then go through an object-tracking procedure. Translated from MyCV's Region Growing 4 and Comprehender. This method was deprecated in favour of a method that uses the GPU instead of the CPU.
- Method B: uses GPU/Vulkan to detect the edge pixels of an image frame first and then derives segments from them, the reverse of Method A's order.
- Perception: making sense of the segments' changes across frames.
- BitmapStream: stores image frames in the BMP image format for testing in MyCV.
- VisualSTM: Visual Short-Term Memory; temporarily stores shapes (segments) in non-volatile (persistent) memory for debugging. Translated from MyCV's Volatile Indices 1.
- VisMemory: stores specific shapes for learning.
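The segmentation idea behind Method A can be sketched as a 4-neighbour region-growing pass over a grayscale frame. The `Segment` struct below is a simplified stand-in for the project's actual Segment type, and the similarity threshold is an illustrative assumption, not Mergen's real API.

```cpp
#include <cstdint>
#include <cstdlib>
#include <queue>
#include <vector>

// Simplified stand-in for the project's Segment struct: each segment is
// just the list of flat pixel indices it covers.
struct Segment { std::vector<uint32_t> pixels; };

// Minimal 4-neighbour region growing over a w*h grayscale image.
// A pixel joins a region when its intensity differs from its already
// labelled neighbour by at most `threshold` (an illustrative parameter).
std::vector<Segment> segmentImage(const std::vector<uint8_t>& img,
                                  int w, int h, int threshold) {
    std::vector<int32_t> label(img.size(), -1);
    std::vector<Segment> segments;
    for (int start = 0; start < w * h; ++start) {
        if (label[start] != -1) continue;  // already part of a segment
        int id = static_cast<int>(segments.size());
        segments.push_back({});
        std::queue<int> q;
        q.push(start);
        label[start] = id;
        while (!q.empty()) {
            int p = q.front(); q.pop();
            segments[id].pixels.push_back(static_cast<uint32_t>(p));
            int x = p % w, y = p / w;
            const int  nbr[4] = {p - 1, p + 1, p - w, p + w};
            const bool ok[4]  = {x > 0, x < w - 1, y > 0, y < h - 1};
            for (int i = 0; i < 4; ++i) {
                // Grow the region while the neighbour's intensity is close enough.
                if (ok[i] && label[nbr[i]] == -1 &&
                    std::abs(int(img[nbr[i]]) - int(img[p])) <= threshold) {
                    label[nbr[i]] = id;
                    q.push(nbr[i]);
                }
            }
        }
    }
    return segments;
}
```

For example, a frame whose left half is black and right half is white yields exactly two segments, one per half.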
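Method B's edge-first approach can likewise be sketched on the CPU with a Sobel gradient-magnitude filter; in Mergen the equivalent work runs on the GPU via Vulkan. The function name, threshold parameter, and image layout here are assumptions for illustration, not the project's shader code.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Sobel gradient-magnitude edge detection on a w*h grayscale frame,
// a CPU sketch of what a Vulkan compute shader would do per pixel.
// Border pixels are skipped; `threshold` decides what counts as an edge.
std::vector<uint8_t> sobelEdges(const std::vector<uint8_t>& img,
                                int w, int h, int threshold) {
    std::vector<uint8_t> edges(img.size(), 0);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            auto at = [&](int dx, int dy) {
                return int(img[(y + dy) * w + (x + dx)]);
            };
            // Horizontal and vertical Sobel kernels.
            int gx = -at(-1, -1) - 2 * at(-1, 0) - at(-1, 1)
                     + at(1, -1) + 2 * at(1, 0) + at(1, 1);
            int gy = -at(-1, -1) - 2 * at(0, -1) - at(1, -1)
                     + at(-1, 1) + 2 * at(0, 1) + at(1, 1);
            // |gx| + |gy| approximates the gradient magnitude cheaply.
            if (std::abs(gx) + std::abs(gy) >= threshold)
                edges[y * w + x] = 255;
        }
    return edges;
}
```

On a frame with a sharp vertical black-to-white step, the interior pixels on both sides of the step are marked as edges.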