Greetings! I have a few questions about video processing with Edgellib.
The documentation for the new Camera class states that the video frame is available for post-processing.
How do you access the video frame? I would like to manipulate it in real time with Core Image filters on iOS.
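For context, here is roughly what I am hoping to do per frame. This is just a sketch on my end; it assumes Edgellib can hand me each frame as a `CVPixelBuffer`, which is exactly what I am unsure about:

```swift
import CoreImage
import CoreVideo

// Reuse a single CIContext; creating one per frame is expensive.
let ciContext = CIContext()

// Hypothetical per-frame callback: apply a Core Image filter in place.
// Assumes Edgellib exposes each frame as a CVPixelBuffer (unconfirmed).
func process(pixelBuffer: CVPixelBuffer) {
    var image = CIImage(cvPixelBuffer: pixelBuffer)

    // Example filter: sepia tone.
    if let sepia = CIFilter(name: "CISepiaTone") {
        sepia.setValue(image, forKey: kCIInputImageKey)
        sepia.setValue(0.8, forKey: kCIInputIntensityKey)
        if let output = sepia.outputImage {
            image = output
        }
    }

    // Render the filtered result back into the same buffer
    // so it can continue through the display/encode pipeline.
    ciContext.render(image, to: pixelBuffer)
}
```

If the Camera class exposes frames in some other form (e.g. a texture or raw bytes), a pointer to the right conversion path would also be appreciated.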
Also, is there a class that can read a local video file/asset and make the video frames available for post-processing?
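If there is no built-in class for this, I could fall back to AVFoundation myself, along the lines of the sketch below (the file path and pixel format are placeholders), but I would prefer a supported Edgellib path if one exists:

```swift
import AVFoundation

// Sketch: pull decoded frames from a local asset with AVAssetReader.
let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/video.mp4"))
guard let track = asset.tracks(withMediaType: .video).first else {
    fatalError("no video track")
}
let reader = try AVAssetReader(asset: asset)
let output = AVAssetReaderTrackOutput(
    track: track,
    outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
reader.add(output)
reader.startReading()

// Iterate decoded frames; each sample buffer wraps a CVPixelBuffer.
while let sampleBuffer = output.copyNextSampleBuffer() {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        // Hand pixelBuffer to post-processing here.
        _ = pixelBuffer
    }
}
```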
Can I pump my own frames through to FrameToSurface?
Do you have any benchmarks?