Hailomz optimize failing during full quant analysis

hailomz optimize fails in the very first epoch of the full quant analysis step with the following error:

TypeError: Exception encountered when calling layer 'lat_model' (type LATModel).
    
    in user code:
    
        File "/notebooks/hailo_dfc/lib/python3.10/site-packages/hailo_model_optimization/algorithms/lat_utils/lat_model.py", line 340, in call  *
            n_ancestors = self._native_model.flow.ancestors(lname)
        File "/notebooks/hailo_dfc/lib/python3.10/site-packages/hailo_model_optimization/acceleras/model/hailo_model/model_flow.py", line 31, in ancestors  *
            return nx.ancestors(self, source)
    
        TypeError: outer_factory.<locals>.inner_factory.<locals>.tf__func() missing 1 required keyword-only argument: '__wrapper'
    
    
    Call arguments received by layer 'lat_model' (type LATModel):
      • inputs=tf.Tensor(shape=(8, 640, 640, 3), dtype=float32)

What might be causing this?

Hey @shubham

This error is typically related to how our Hailo Model Zoo optimizes models, particularly to the LATModel layer and how it handles its inputs during quantization. It often occurs during the Full Quantization Analysis step.

There are a few potential causes and solutions we can explore:

  1. Model Architecture Compatibility: There might be a mismatch between your custom model architecture and what our Hailo Model Zoo expects, especially if you’re using a newer version or a model with an unconventional layer configuration.
  2. LATModel Functionality: This error can arise when there are issues with tensor shapes or expected inputs for certain layers, such as misaligned input/output names in the Hailo optimization pipeline. The sketch after this list shows one way to check whether the failure reproduces outside the Model Zoo wrapper.
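
To help isolate the issue, here is a minimal sketch of running the optimization step directly through the DFC Python API, bypassing the Model Zoo wrapper. It assumes the HAR produced by the parse step is available as yolov8n_seg.har and that the input is 640x640x3 as in your traceback; the file names and the model_optimization_flavor command are assumptions and may differ in your DFC version.

    # Sketch: reproduce the optimize step via the DFC Python API to see whether
    # the LATModel failure also occurs outside the Model Zoo wrapper.
    import numpy as np
    from hailo_sdk_client import ClientRunner

    # Assumption: the HAR from the parse step is available under this name.
    runner = ClientRunner(har="yolov8n_seg.har")

    # Small random calibration set just to exercise the pipeline; replace with
    # real preprocessed images for a meaningful quantization.
    calib_data = np.random.randint(0, 255, size=(64, 640, 640, 3)).astype(np.float32)

    # Lowering the optimization level skips the heavier analysis stages, which
    # helps tell whether only the full quant analysis step is affected.
    # (Model-script command syntax may vary between DFC releases.)
    runner.load_model_script("model_optimization_flavor(optimization_level=0)\n")

    runner.optimize(calib_data)
    runner.save_har("yolov8n_seg_quantized.har")

If this direct run succeeds, the problem is more likely in the Model Zoo configuration for your model than in the DFC itself.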

Hi @omria
I am trying to generate a .hef to run yolov8n_seg on Hailo-8L. Is there any incompatibility in this combination?
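
For context, the flow I am attempting corresponds roughly to the following DFC calls once optimization succeeds; the file names are placeholders and the hailo8l architecture string is my assumption for Hailo-8L.

    # Sketch of the final compile step targeting Hailo-8L, assuming an
    # optimized HAR already exists. The hw_arch is normally fixed when the
    # model is first parsed; it is repeated here only for clarity.
    from hailo_sdk_client import ClientRunner

    runner = ClientRunner(har="yolov8n_seg_quantized.har", hw_arch="hailo8l")

    hef_bytes = runner.compile()  # serialized HEF ready for the device
    with open("yolov8n_seg.hef", "wb") as f:
        f.write(hef_bytes)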