I’ve been working on converting a PyTorch model (SMP DeepLabV3+) trained on a custom dataset to run on Hailo using the Dataflow Compiler, and I’ve hit a roadblock. Hoping someone here might have some advice!
Here’s what I’ve done so far (rough code sketches of these steps below):
Converted the model to ONNX format.
Followed the tutorial Jupyter notebooks:
Created the calibration .npy file.
Generated the .alls file with the correct normalization settings.
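For reference, the export and calibration-set steps were roughly along these lines (encoder, paths, input size, and folder layout are placeholders, not my exact values; normalization is left to the .alls script, so raw pixel values are saved):

import numpy as np
import torch
import segmentation_models_pytorch as smp
from pathlib import Path
from PIL import Image

# Export the trained SMP DeepLabV3+ to ONNX
model = smp.DeepLabV3Plus(encoder_name="resnet34", classes=4)
model.load_state_dict(torch.load("weights.pth", map_location="cpu"))
model.eval()
dummy = torch.randn(1, 3, 512, 512)
torch.onnx.export(model, dummy, "model.onnx", opset_version=11,
                  input_names=["input"], output_names=["output"])

# Build the calibration set as one (N, H, W, C) array
paths = sorted(Path("calib_images").glob("*.png"))[:64]
images = [np.array(Image.open(p).convert("RGB").resize((512, 512))) for p in paths]
np.save("calib_set.npy", np.stack(images).astype(np.float32))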
But when I got to the optimization step, I hit this error:
ValueError: Inputs and input nodes not the same length in layer combustion_model/concat1 - inputs: 4, nodes: 5
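For context, the optimization step follows the tutorial notebooks, roughly like this (hw_arch, node names, and shapes stand in for my actual values):

import numpy as np
from hailo_sdk_client import ClientRunner

runner = ClientRunner(hw_arch="hailo8")
runner.translate_onnx_model(
    "model.onnx", "combustion_model",
    start_node_names=["input"], end_node_names=["output"],
    net_input_shapes={"input": [1, 3, 512, 512]})

with open("combustion_model.alls") as f:
    runner.load_model_script(f.read())

calib = np.load("calib_set.npy")
runner.optimize(calib)  # <- this is the call that raises the ValueError
runner.save_har("combustion_model_optimized.har")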
I reviewed the .hn file using Netron and noticed that the resize and concat38 layers both feed into the same node.
I figured it was an issue with how the resize layer interacts with the 4 convolution layers. So, I manually added another concat layer to split things up.
But… the error shifted to the new layer:
ValueError: Inputs and input nodes not the same length in layer combustion_model/concat11 - inputs: 3, nodes: 4
After that, I thought maybe the problem was my calibration set. I suspected it might be because some images didn’t have all 4 segmentation classes present, so I created a subset where every image has all 4 classes.
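The filter I used to build that subset was roughly the following (the mask folder and class IDs are simplified placeholders):

import numpy as np
from pathlib import Path
from PIL import Image

REQUIRED_CLASSES = {0, 1, 2, 3}

# Keep only images whose ground-truth mask contains every class
selected = []
for mask_path in sorted(Path("masks").glob("*.png")):
    mask = np.array(Image.open(mask_path))
    if REQUIRED_CLASSES.issubset(np.unique(mask)):
        selected.append(mask_path.stem)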
Error still persists.
Does anyone have experience with this type of issue or know what might be going wrong? Any ideas would be super appreciated!
Addressing Concatenation Layer Issues in Model Optimization
Let’s break down the problem and solution:
Understanding the Issue
The error ValueError: Inputs and input nodes not the same length in layer ... indicates a structural misalignment in your model’s concatenation layers. This typically occurs when:
The ONNX graph has mismatched inputs for Concat operations
Intermediate layer outputs aren’t properly connected to the concatenation nodes
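Both points can be checked on the ONNX graph itself before it ever reaches the compiler, for example by running the ONNX checker and shape inference (file names are placeholders):

import onnx

model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # raises if the graph structure is malformed
inferred = onnx.shape_inference.infer_shapes(model)
onnx.save(inferred, "model_with_shapes.onnx")  # easier to inspect in Netron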
Recommended Solution
1. Model Analysis
First, examine your model structure using Netron. Focus on the concat1 and concat11 layers and verify their input connections.
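Note that the concat1/concat11 names come from the parsed .hn graph, so they won’t match the ONNX node names exactly. As a programmatic complement to Netron, you can also list every Concat node in the ONNX graph and count its inputs (file name is a placeholder):

import onnx

model = onnx.load("model.onnx")
for node in model.graph.node:
    if node.op_type == "Concat":
        print(node.name, "inputs:", len(node.input), list(node.input))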
2. Model Architecture Correction
If you find input mismatches, update your PyTorch model:
# Example of a properly wired concatenation block
import torch
import torch.nn as nn
import torch.nn.functional as F

class YourModel(nn.Module):
    def __init__(self, in_channels=3, mid_channels=16):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, mid_channels, 3, padding=1)
        self.conv2 = nn.Conv2d(in_channels, mid_channels, 3, padding=1)

    def forward(self, x):
        # Ensure all branch outputs are properly defined
        branch1 = self.conv1(x)
        branch2 = self.conv2(x)
        # The resized branch must match the conv branches' spatial size
        branch3 = F.interpolate(x, size=branch1.shape[2:], mode="bilinear", align_corners=False)
        # Every intended input must appear in the concat list
        concat_output = torch.cat([branch1, branch2, branch3], dim=1)
        return concat_output
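After correcting the forward pass, re-export the model to ONNX and re-run the DFC parsing step so that a fresh .hn/.har graph is generated. Patching concat layers by hand in the already-parsed graph tends to just move the mismatch to the next layer, which matches the error shifting from concat1 to concat11 above.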
I do not believe the ONNX file has an error, because I can run inference with the model, and the Netron visualization does not show any issues like the .har file does.
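For reference, the inference check was just a quick onnxruntime run along these lines (input shape is a placeholder):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
x = np.random.rand(1, 3, 512, 512).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: x})
print([o.shape for o in outputs])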