Replies: 1 comment
Hi, load_external_weights() is for external weights. These are stored in DRAM and streamed into the FINN accelerator by a DMA core (similar to the DMAs for primary input/output generated by FINN in a full "ZynqBuild"). load_runtime_weights() is for runtime-configurable weights. These are stored in a FINN memstreamer IP with an AXI-lite interface inside the FPGA and streamed to the MVAU from there. The memory resource used for the weights can be set to LUTRAM, BRAM, or URAM. For LUTRAM and BRAM it is optional to (re-)load the weights as they are contained within the bitstream. For URAM you must load them as the URAM content cannot be set with the bitstream.
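For reference, here is a rough sketch of how the two calls are used from the FINN-generated PYNQ driver on the board. The class name FINNExampleOverlay, the platform string, the bitfile name, and the directory layout are assumptions taken from the generated driver_base.py/driver.py and may differ between FINN versions, so please check the driver shipped with your own build rather than treating this as the exact API:

```python
# Hedged sketch: invoking the two weight-loading paths from the
# FINN-generated PYNQ driver on the board. Names and signatures follow
# driver_base.py as generated by recent FINN releases (assumption).
from driver import io_shape_dict          # generated next to the bitstream
from driver_base import FINNExampleOverlay

accel = FINNExampleOverlay(
    bitfile_name="finn-accel.bit",          # placeholder bitstream name
    platform="zynq-iodma",
    io_shape_dict=io_shape_dict,
    batch_size=1,
    runtime_weight_dir="runtime_weights/",  # .dat files for the AXI-lite memstreamers
)

# Runtime-configurable weights (MVAUs built with decoupled-mode,
# runtime-writeable weights): streamed over AXI-lite into LUTRAM/BRAM/URAM.
# Optional for LUTRAM/BRAM (already in the bitstream), mandatory for URAM.
# Depending on the driver version, the constructor may already do this when
# runtime_weight_dir is populated.
accel.load_runtime_weights()

# External weights (layers built with mem_mode="external"): .npy files that
# a DMA core streams from DRAM into the accelerator at run time.
accel.load_external_weights()
```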
Hi everyone.
I have a question about FINN's runtime weights. I am currently trying to make the CNN accelerator generated by FINN partially reconfigurable. My block design is as shown in the figure (StreamingDataflowPartition_1 is my PR partition).
When I program the full bitstream, everything works correctly and the model accuracy matches expectations.
However, when I program the partial bitstream, the model output does not meet expectations, so I suspect the model parameters still need to be loaded.
I found that FINN provides a function called load_external_weights(), which expects a .npy file, so I took the weight.npy from the MatrixVectorActivation folder generated by FINN (as shown in the figure) and copied it into the corresponding runtime_weights folder. However, this does not seem to have any effect: the parameters do not appear to be loaded correctly, and the final output of the model is still not as expected.
My question is: how should I correctly load the weights after programming the partial bitstream, and what is the difference between load_external_weights() and load_runtime_weights()?
Thanks.
finn_cnn.zip