Video Processing Workflow

PostPosted: Mar 5, 2012 @ 9:45pm
by Don_Megga
Greetings! I have a few questions about video processing using Edgelib.

The documentation for the new Camera class states that the video frame is available for post-processing.
How do you access the video frame? I would like to manipulate it in real time with Core Image filters on iOS.

Also, is there a class that can read a local video file/asset and then make the video frames available for post-processing?

Can I pump my own frames through to FrameToSurface?

Do you have any benchmarks?

Re: Video Processing Workflow

PostPosted: Jun 7, 2012 @ 11:41am
by edge
Hi,

Edgelib provides a cross-platform interface to individual camera frames on top of the native camera APIs, many of which have a different architecture than Edgelib's event-based model. Because of this conversion step, it is generally impossible to safely mix Edgelib's camera interface with the native interfaces. If you require native iOS functions such as Core Image, you should implement your own AV stream as well. The alternative is to do the post-processing in software, without any iOS calls.
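As a rough illustration of the software route, a minimal in-place filter over raw frame pixels could look like the sketch below. The 32-bit RGBA layout is an assumption on our part; adapt it to whatever pixel format your frames actually arrive in.

    // Minimal software post-processing sketch: convert an RGBA8888
    // frame to grayscale in place, with no iOS calls involved.
    // The RGBA8888 layout is assumed, not mandated by Edgelib.
    #include <cstddef>
    #include <cstdint>

    void GrayscaleInPlace(uint8_t* pixels, size_t width, size_t height)
    {
        for (size_t i = 0; i < width * height; ++i)
        {
            uint8_t* p = pixels + i * 4;
            // Integer luma approximation of the BT.601 weights
            // (0.299 R + 0.587 G + 0.114 B), scaled by 256.
            uint8_t luma = static_cast<uint8_t>(
                (77 * p[0] + 150 * p[1] + 29 * p[2]) >> 8);
            p[0] = p[1] = p[2] = luma;  // leave alpha (p[3]) untouched
        }
    }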

The OnCameraFrame callback provides the frame data to your application. Any form of post-processing can be performed in the same function. FrameToSurface is basically the "common" implementation of that post-processing: it converts and rescales the input data to fit a Surface object with a pixel format matching the display. Its use is not limited to camera frames - you can indeed provide your own pixel data and have Edgelib do the conversion.
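To make that concrete, the standalone sketch below shows the kind of convert-and-rescale step such a routine performs: nearest-neighbor scaling from an assumed RGBA8888 input to an assumed RGB565 output. It is our own illustration of the concept, not Edgelib's actual implementation.

    // Illustration only: rescale (nearest-neighbor) and convert an
    // RGBA8888 input buffer to RGB565 output, the kind of work a
    // convert-to-display-surface routine has to do. The function
    // name and both pixel formats are assumptions, not Edgelib API.
    #include <cstddef>
    #include <cstdint>

    void ConvertAndRescale(const uint8_t* src, size_t srcW, size_t srcH,
                           uint16_t* dst, size_t dstW, size_t dstH)
    {
        for (size_t y = 0; y < dstH; ++y)
        {
            size_t sy = y * srcH / dstH;          // nearest source row
            for (size_t x = 0; x < dstW; ++x)
            {
                size_t sx = x * srcW / dstW;      // nearest source column
                const uint8_t* p = src + (sy * srcW + sx) * 4;
                dst[y * dstW + x] = static_cast<uint16_t>(
                    ((p[0] >> 3) << 11) |         // 5 bits red
                    ((p[1] >> 2) << 5)  |         // 6 bits green
                     (p[2] >> 3));                // 5 bits blue
            }
        }
    }

Nearest-neighbor sampling keeps the sketch short; a production path would typically use optimized per-format conversion routines.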

We unfortunately do not have benchmarks available.

- Marcel