
[GPU] fix memory conflict for multi iteration in loop. #28056

Closed
wants to merge 13 commits
6 changes: 6 additions & 0 deletions src/plugins/intel_gpu/src/graph/loop.cpp
@@ -1009,6 +1009,12 @@ void loop_inst::set_memory_in_body_network(cldnn::network::ptr body_network,
"impl_params layout size(", impl_layout.to_short_string(),
") should not exceed memory size(", updated_mem->get_layout().to_short_string(), ")");
// Set need_to_check_memory_to_set to false to set output memory even if the input node has static shape,
if (impl_layout.bytes_count() < updated_mem->get_layout().bytes_count()) {
Contributor
I think this check condition creates redundant memory copies in cases where there is no memory conflict.

Contributor

@timxu826 From the description, it seems you are resolving a memory conflict issue.
If so, is this change enough?
Shouldn't we prevent memory reuse for the problematic case?

auto& inst_engine = body_network->get_engine();
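// Allocate a dedicated buffer with updated_mem's layout and copy the contents into it,
// so the body network's input no longer shares storage with the original buffer.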
auto new_mem = inst_engine.allocate_memory(updated_mem->get_layout(), updated_mem->get_allocation_type(), true);
new_mem->copy_from(body_network->get_stream(), *updated_mem);
updated_mem = new_mem;
}
body_network->set_input_data(inst->id(), updated_mem, false);
// Update impl_params.output_layouts[0] to updated_mem's layout
inst->update_shape();
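
For context, the patch avoids the conflict by copying the loop body's input into a freshly allocated buffer (the allocate_memory plus copy_from calls above) instead of letting it alias the incoming memory. The following is a minimal, self-contained sketch in plain C++, not OpenVINO/cldnn code; the run_iteration function and buffer names are hypothetical. It only illustrates why aliasing between an iteration's input and output can corrupt data and how copying into a dedicated buffer avoids it.

#include <cassert>
#include <cstddef>
#include <vector>

// One mock "iteration": reads from `in`, writes into `out`. If `in` and `out`
// alias the same storage, later reads observe values this call already overwrote.
static void run_iteration(const int* in, int* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = in[n - 1 - i] + 1;  // reads positions that may already be overwritten
    }
}

int main() {
    const std::size_t n = 4;
    std::vector<int> shared{1, 2, 3, 4};

    // Conflicting case (left commented out): input and output share one buffer,
    // so the reversed reads above would pick up freshly written values.
    // run_iteration(shared.data(), shared.data(), n);

    // Copy-on-conflict workaround, analogous to allocate_memory + copy_from in
    // the patch: hand the body its own copy of the input before running.
    std::vector<int> input_copy = shared;                 // fresh buffer + copy
    run_iteration(input_copy.data(), shared.data(), n);   // reads never alias writes
    assert(shared[0] == 5 && shared[3] == 2);
    return 0;
}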