lib: pixel: towards gstreamer/ffmpeg/opencv #86671
base: main
Conversation
To test the samples on the native simulator:
lib/pixel/resize.c
Outdated

```c
size_t src_i = src_w * bits_per_pixel / BITS_PER_BYTE;
size_t dst_i = dst_w * bits_per_pixel / BITS_PER_BYTE;

memcpy(&dst_buf[dst_i], &src_buf[src_i], bits_per_pixel / BITS_PER_BYTE);
```
More of a suggestion:
If it is guaranteed that the src and dst addresses never overlap, then we are good. But for cases where the src and dst addresses might overlap, I have seen memcpy leading to pixel data corruption. So it would be good to either handle such a scenario or use the safe version of memcpy (memcpy_s), which inherently checks for address overlap.
This function could be called to shrink a buffer in-place, and this would lead to a bug!
memcpy_s() seemed to be unsupported, so I switched to memmove(), which allows incrementally improving performance as use-cases show up. This would only copy 1 to 3 bytes in typical usage anyway, I think.
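For illustration, here is a minimal sketch (not the actual lib/pixel code) of why memmove() matters when a line is shrunk in place, so the source and destination ranges may overlap:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper for illustration only: copy one pixel within the same
 * buffer. When shrinking a line in place, &buf[dst_i] and &buf[src_i] can
 * overlap; memcpy() would then be undefined behavior, while memmove() is
 * guaranteed to copy correctly.
 */
static inline void copy_pixel_in_place(uint8_t *buf, size_t dst_i, size_t src_i,
					size_t bytes_per_pixel)
{
	memmove(&buf[dst_i], &buf[src_i], bytes_per_pixel);
}
```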
drivers/video/video_sw_isp.c
Outdated

```c
}

#ifdef CONFIG_POLL
static int video_sw_isp_set_signal(const struct device *dev, enum video_endpoint_id ep,
```
Just a suggestion:
It seems like one of the input parameters (video_endpoint_id ep) is not used in the function. If we do not plan to use it, it would be good to remove it.
Input validation on it was missing!
Just a side note: I think all things related to video_endpoint_id should be removed from the video framework. The video port / endpoint is already described in the devicetree and, up to now, all the video_endpoint_id parameters passed in video functions are not used at all; they just serve as a check (which is not necessary). I plan to make a PR to remove them.
A no-op everywhere in current drivers, that is true... This PR uses multiple endpoints; should I try to make it single-endpoint (i.e. 1 input device, 1 output device)? Did you already have some plan you wanted to try for this scenario?
Yes, I have.

> should I try to make it single-endpoint (i.e. 1 input device, 1 output device)?

It depends on the complexity of the real hardware device whether we should expose it as a single device or multiple devices. But I think that is not really related to the point discussed here.

In the v4z framework, the video_endpoint_id parameter put in each API function has nothing to do with its meaning. There are current drivers such as mipicsi2rx which also have multiple endpoints: ep_in is the camera sensor and ep_out is the CSI (DMA). So why do they not need to distinguish the endpoint direction the way the ISP (as in the current PR) does?

The difference is not whether a device has multiple endpoints (directions) or not, but that the ISP (and other devices like the PxP) falls into another category, which Linux calls an m2m device. For this kind of device, the driver needs two separate buffer queues and hence needs to distinguish between the input side (which Linux confusingly calls "output") and the output side (which Linux calls "capture"). To support this kind of device, we can do as follows, depending on the API (a rough sketch follows this list):
- get/set_format(): add a "type" (input or output) field in struct video_format
- enqueue/dequeue(): add a "type" field in struct video_buffer
- set_stream(): add a type parameter in the function API. I see that you didn't implement this callback, but streaming needs to be started separately for each side.
- set/get_ctrl(): nothing to do, as control IDs are already different for input and output (and also, we rarely have to set controls on both input and output)
- set/get/enum_frmival(): no need.
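A rough sketch of what the proposed "type" additions could look like; the enum and field names below are illustrative assumptions, not the final framework API:

```c
#include <stdint.h>

/* Illustrative sketch only; names are assumptions, not the final API. */
enum video_buf_type {
	VIDEO_BUF_TYPE_INPUT,	/* data fed into the m2m device (the Linux "output" queue) */
	VIDEO_BUF_TYPE_OUTPUT,	/* data produced by the m2m device (the Linux "capture" queue) */
};

/* The same kind of field would also be added to struct video_buffer, and a
 * type parameter to set_stream(), so each side can be queued and started
 * independently.
 */
struct video_format {
	uint32_t pixelformat;	/* existing fields of the format descriptor */
	uint32_t width;
	uint32_t height;
	uint32_t pitch;
	enum video_buf_type type;	/* proposed addition: which queue this format applies to */
};
```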
I will push the PR this weekend. I plan to do more rework in the framework to support m2m devices, as we also need it for rewriting the PxP driver, as said in my last comment here.
Thank you for the insight, I understand the underlying challenges a little bit better.
I'll do my best to make these drivers follow the APIs you introduce. :) Good thing that the PRs got split!
P.S.: m2m reminds me of this other PR (from a few months ago):
force-push:
The global goal is to make defining a new format or operation much less bureaucratic: only a line-conversion function, wrapped in a frame-conversion function (stream), plus a top-level macro declared for it. This also reduces the API surface to just the essential:
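As a hypothetical illustration of that pattern (the function names and exact signatures below are invented for the example, not copied from this PR):

```c
#include <stdint.h>
#include <stddef.h>

/* 1. Line-conversion function: RGB24 to 8-bit grayscale, one line at a time
 *    (BT.601 integer luma weights; illustrative only).
 */
static inline void example_rgb24line_to_y8line(const uint8_t *src, uint8_t *dst, uint16_t width)
{
	for (uint16_t w = 0; w < width; w++) {
		dst[w] = (77 * src[3 * w] + 150 * src[3 * w + 1] + 29 * src[3 * w + 2]) >> 8;
	}
}

/* 2. Frame-conversion wrapper: apply the line function to every line of the frame. */
static void example_rgb24frame_to_y8frame(const uint8_t *src, uint8_t *dst,
					  uint16_t width, uint16_t height)
{
	for (uint16_t h = 0; h < height; h++) {
		example_rgb24line_to_y8line(&src[3 * (size_t)width * h],
					    &dst[(size_t)width * h], width);
	}
}

/* 3. A top-level macro (not shown here) would then declare this conversion as a
 *    stream-processing element usable in a pipeline.
 */
```

Keeping the line function static inline lets the compiler fold the constants and avoid a per-pixel call inside the frame loop.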
force-push:
The documentation is somewhat better now, but it needs a proper top-level introduction outside of Doxygen. I hope these force-pushes are not distracting; I add them to avoid extra PRs after the merge while the review happens. I'll try to batch the changes more than I currently do, to reduce CI load and notifications.
The "pixel" library aims facilitating the implementation of all image processing tasks, such as pixel conversion, with a focus on low-resource environments. The processing functions scale down to per-pixel "kernels" to line-based conversion to full streams of several frames. Signed-off-by: Josuah Demangeon <me@josuah.net>
The newly introduced lib/pixel features utilities that help compose video pipelines for stream processing, as well as debug utilities. Signed-off-by: Josuah Demangeon <me@josuah.net>
The tests use data generated by the ffmpeg command-line utility in order to avoid the bias of having the same person implement both the tests and the source code. Signed-off-by: Josuah Demangeon <me@josuah.net>
Add "Pixel library" area, which covers the newly introduced lib/pixel and associated tests and samples. Signed-off-by: Josuah Demangeon <me@josuah.net>
force-push:
New features will be proposed on other branches to reduce the review burden, now using 1240bfd as a stable checkpoint.
This is a proposed implementation for:

Implementation notes:
- Functions are named pixel_<a>_to_<b>() or pixel_<operation>_<a>(), where <a> and <b> are pixel format names with a frame/line/hist/avg suffix (no reliance on <zephyr/drivers/video.h> or <zephyr/drivers/display.h> formats); see the sketch after this list.
- static inline: allows the compiler to optimize the constants away and build operations that work on whole lines without any function call inside.
- VIDEO_SW_GENERATOR: would using it to build the test data be interesting?
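For instance, an operation over a whole frame could follow the pixel_<a>_to_<b>() form as in this hypothetical sketch (the exact functions in the PR may differ):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch following the naming scheme: <a> = rgb24frame, <b> = rgb24avg.
 * It computes the average RGB value of a whole RGB24 frame.
 */
static inline void pixel_rgb24frame_to_rgb24avg(const uint8_t *src, size_t src_size,
						uint8_t avg[3])
{
	uint64_t sum[3] = {0};
	size_t nb_pixels = src_size / 3;

	for (size_t i = 0; i + 2 < src_size; i += 3) {
		sum[0] += src[i];
		sum[1] += src[i + 1];
		sum[2] += src[i + 2];
	}

	for (int c = 0; c < 3; c++) {
		avg[c] = (nb_pixels == 0) ? 0 : (uint8_t)(sum[c] / nb_pixels);
	}
}
```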