Generated IP Block Behaves Differently in Vivado From PyTorch Model #1433
-
Project Introduction
Hello everyone 👋, I’m currently developing a demo that implements basic numeral recognition on an FPGA. My setup uses the Cmod A7 15T FPGA development board, which houses the AMD Artix-7 xc7a15tcpg236-1. The interface consists of a 5x5 grid of buttons where a user "draws" a numeral. This grid serves as the input for a simple Neural Network (NN) that infers which digit was drawn. Below is an image of the board illustrating the intended use case:
I have created a GitHub repository with the source code here: MichalVarsanyi/FPGA-NN-Demo
Workflow
As a Windows user, I encountered limitations with certain …
The Issue
The core issue is that the generated IP core produces seemingly random or incorrect inference results. Below is the pipeline visualization generated by the HLS model:
The following image shows the IP core instantiated in Vivado. The input dimensions match the model pipeline (16 bits * 25 inputs = 400 bits total), and the output dimensions are also consistent with the design.
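One thing worth double-checking with a wide concatenated port like this is the packing order of the 25 inputs inside the 400-bit word. A sketch of the packing I would expect, assuming input 0 occupies the least-significant 16 bits (the actual ordering should be confirmed against the generated Verilog, and the `1 << 10` scaling below assumes 10 fractional bits, which may not match the project's precision):

```python
def pack_inputs(pixels, width=16):
    """Pack a list of signed fixed-point integers into one wide word,
    element 0 in the least-significant bits (assumed ordering)."""
    word = 0
    for i, p in enumerate(pixels):
        # Mask each value to `width` bits (two's complement), then shift into place.
        word |= (p & ((1 << width) - 1)) << (i * width)
    return word

# Hypothetical example: a 5x5 grid flattened row-major, with one "pressed"
# button encoded as 1.0 under an assumed 10-fractional-bit format.
grid = [0] * 25
grid[0] = 1 << 10
packed = pack_inputs(grid)
print(f"{packed:0100x}")  # 400 bits = 100 hex digits
```

If the testbench packs inputs in the opposite order (or with a different per-element width), the IP sees scrambled pixels and the logits come out looking random even though the IP itself is correct.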
During Vivado behavioral simulation, I send 400 bits to the input and monitor the output. I configured the output display settings to: Radix > Real Settings > Fixed Point > Signed, with the binary point set after 20 bits (37 − 17 = 20). The simulation results show significant inaccuracies. I take the largest logit at the output of the NN as the inference prediction.
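The radix settings above amount to interpreting each 37-bit output as a signed fixed-point value with 20 fractional bits. The same decoding can be reproduced in software as a cross-check on the waveform display (a sketch; the 37/17 split is taken from the post, and the example value is hypothetical):

```python
def decode_fixed(raw, total_bits=37, frac_bits=20):
    """Interpret a raw unsigned bit pattern as a signed fixed-point value."""
    if raw >= 1 << (total_bits - 1):
        raw -= 1 << total_bits  # sign bit set: apply two's complement
    return raw / (1 << frac_bits)

# Only bit 20 set -> exactly 1.0 under this format
print(decode_fixed(0x100000))
# All ones -> the smallest-magnitude negative value, -2**-20
print(decode_fixed((1 << 37) - 1))
```

Decoding the raw logits this way in the testbench log (instead of relying on the waveform radix) makes it easier to diff against the software model's outputs value by value.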
You can find the testbench used to generate this waveform here: FPGA_NN_TB.txt
Additional Information
To verify my toolchain, I compiled a simplified "Pass-Through" NN where a single 16-bit input is passed directly to a 16-bit output. This test worked perfectly, with the input successfully appearing at the output one clock cycle later.
Because the simple model works, I believe the issue lies in the generated NN IP itself rather than in my testbench or interface wiring. I welcome any suggestions on why I may be seeing this behaviour :))
Replies: 2 comments 1 reply
-
This looks like a nice project! So you see unexpected predictions after Vivado simulation. Is the Vivado simulation result consistent with the HLS C Simulation result? And is the HLS C Simulation result consistent with the trained NN? If you haven't gone through these steps yet, you may want to make sure that you get agreement at both of those stages before looking at the Vivado behavioural simulation in more detail. These will help to isolate whether the issue is related to the integration of the HLS IP with the interfaces, or the configuration of the hls4ml HLS NN. For instance, errors due to quantization of model parameters may cause disagreement already at the HLS C Simulation comparison with the trained NN.
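On the quantization point: a quick way to gauge how much precision the fixed-point types cost is to round the trained weights to the HLS precision in software and re-run inference in the float framework. A minimal sketch of that rounding, assuming an ap_fixed<16,6>-style format (16 total bits, 6 integer bits including sign; the actual precision in the project's hls4ml config may differ):

```python
def quantize(x, total_bits=16, int_bits=6):
    """Round a float to the nearest representable ap_fixed<total_bits, int_bits>
    value, saturating at the format's range limits (assumed rounding/saturation
    behaviour; HLS types can be configured differently)."""
    frac_bits = total_bits - int_bits
    scale = 1 << frac_bits
    lo = -(1 << (int_bits - 1))            # most negative representable value
    hi = (1 << (int_bits - 1)) - 1 / scale  # most positive representable value
    q = round(x * scale) / scale
    return min(max(q, lo), hi)

# Small weights survive with ~2**-10 error; large ones saturate hard.
for w in [0.123456, -3.9, 40.0]:
    print(w, "->", quantize(w))
```

If inference with quantized weights already disagrees with the float model, the fix is in the precision configuration, not in the Vivado integration.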
-
Adding to @thesps's comment: there are generally two sources of "wrong" outputs in Vivado simulation.
To isolate your issue, I recommend: