[GPU] Fix memory conflict for multiple iterations in loop. #28056
base: master
Conversation
```cpp
@@ -1009,6 +1009,12 @@ void loop_inst::set_memory_in_body_network(cldnn::network::ptr body_network,
                    "impl_params layout size(", impl_layout.to_short_string(),
                    ") should not exceed memory size(", updated_mem->get_layout().to_short_string(), ")");
    // Set need_to_check_memory_to_set to false to set output memory even if the input node has static shape,
    if (impl_layout.bytes_count() < updated_mem->get_layout().bytes_count()) {
```
I think this check condition creates redundant memory copies in cases where there is no memory conflict.
@timxu826 From the description, you seem to be resolving a memory conflict issue?
If so, is this check sufficient?
Shouldn't we prevent memory reuse for the problematic case instead?
build_jenkins
The cause is shown in the graph below.
The black, green, and dotted lines denote the primitive dependency, the memory buffer reuse, and the memory conflict (at the second iteration), respectively.
The body of the loop is compiled as a separate model, so it is not aware of the data reuse through the backedge across multiple iterations.
Tickets: