There was an error compiling the quantized model into a HEF file

Hello friends in the community, I tried to convert a custom model into a HEF model, but encountered the following problem while compiling it:

[error] Mapping Failed (allocation time: 2m 52s)
Compiler could not find a valid partition to contexts. Most common error is: Automri finished with too many resources on context_4 with 11/68 failures.

[error] Failed to produce compiled graph
[error] BackendAllocatorException: Compilation failed: Compiler could not find a valid partition to contexts. Most common error is: Automri finished with too many resources on context_4 with 11/68 failures.

If anyone can help me answer, I would be extremely grateful.


Hey @niyu6 ,

I understand you’re encountering resource allocation challenges during the partitioning phase, particularly with context_4 exceeding the available hardware resources.
I can provide some general steps that often help resolve allocation issues during compilation, but to give you the most accurate and focused assistance, it would be really helpful if you could share:

  1. Your model file or architecture details
  2. The profiler output from:
hailo_dataflow_compiler profile --input-model your_model.tflite --output-directory profile_results
  3. A Netron visualization of your model’s structure

In the meantime, here are some common approaches that might help:

Optimize Your Model

Start by simplifying your model structure:

Key optimization areas:
- Reduce layer complexity or use smaller kernel sizes
- Lower the number of channels in MAC-heavy layers
- Consider reducing the input resolution
- Ensure full int8 quantization (see the sketch below)
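
For the int8 quantization point, here is a rough sketch of how quantization is usually driven through the DFC Python API. Treat it as an outline only: the model name, input shape, and calibration data are placeholders, and exact signatures can differ between DFC versions.

import numpy as np
from hailo_sdk_client import ClientRunner

# Parse the ONNX model for the target device (hailo8 is used here as an example)
runner = ClientRunner(hw_arch="hailo8")
runner.translate_onnx_model("custom_model.onnx", "custom_model")

# Calibration set: a small batch of preprocessed inputs in NHWC layout.
# Random data is only a placeholder -- use real samples in practice.
calib_dataset = np.random.rand(64, 224, 224, 3).astype(np.float32)

# Full int8 quantization happens as part of the optimization step
runner.optimize(calib_dataset)
runner.save_har("custom_model_quantized.har")

If optimization succeeds but compilation still fails on context_4, the problem is more likely the size or structure of the model than the quantization itself.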

Try enabling streaming to process larger layers in chunks:

hailo compile --input-model custom_model.tflite \
    --output-directory compiled_hef --enable-streaming

You can also limit context usage:

hailo compile --input-model custom_model.tflite \
    --output-directory compiled_hef --max-contexts 4
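
If your CLI does not accept these flags (they vary between DFC versions), a similar effect can sometimes be reached with a model script through the Python API. The performance_param(compiler_optimization_level=max) directive below is the one used in Hailo Model Zoo .alls files; consider this a sketch with placeholder file names rather than a guaranteed fix.

from hailo_sdk_client import ClientRunner

# Start from an already-quantized HAR (see the quantization sketch above)
runner = ClientRunner(har="custom_model_quantized.har")

# Ask the allocator to spend more effort searching for a valid mapping
runner.load_model_script("performance_param(compiler_optimization_level=max)\n")

# Compile and write the HEF to disk
hef = runner.compile()
with open("custom_model.hef", "wb") as f:
    f.write(hef)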

Best regards,
Omria

Thank you for your patient answer. My files were all on the server, but I accidentally didn’t save them, so many of my files and models are gone. At the moment I don’t have the hardware, only a local test environment. I would like to understand why this problem occurs on context_4.

These are the possible reasons for such an error:

  1. Resource Overflow: Context 4 exceeds available compute or memory limits

  2. Poor Layer Distribution: Model layers not efficiently distributed across contexts

  3. Size Constraints: Network too large for a single context’s capacity (see the size check sketched after this list)

  4. Layer Compatibility: Some layers don’t map well to Hailo architecture

  5. Resource Management: Inefficient context utilization or compiler settings
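
For point 3, a quick sanity check is to count the parameters in the ONNX graph before involving the compiler at all. A minimal sketch (the file name is just an example):

import numpy as np
import onnx

model = onnx.load("custom_model.onnx")

# Sum the sizes of all weight tensors stored in the graph
total_params = sum(int(np.prod(init.dims)) for init in model.graph.initializer)
print(f"Total parameters: {total_params:,}")

A very large parameter count, or a handful of extremely wide layers, is a strong hint that the compiler will have to split the network into many contexts, which is exactly where allocation failures like the one on context_4 tend to appear.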

Let me know if you need more details about any of these issues.

Also, I can only execute the hailo compiler command without the parameters you provided.


This is my ONNX model, which has already been simplified.

Thank you again for your answer. This is my quantized model.


I can execute the hailo command in the terminal, but I cannot run any of the hailo_dataflow_compiler commands.

Hey @niyu6 ,

Let’s start with the DFC: if you can run the hailo commands from the command line, then you can compile the model there.
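
Since the hailo command runs but hailo_dataflow_compiler does not, it is also worth checking whether the Python package behind the DFC can be imported; if it can, the parse/optimize/compile flow sketched above can be driven from Python instead of the missing entry point. A one-line check (assuming the package name hailo_sdk_client, which is what the DFC tutorials use):

import hailo_sdk_client
print(hailo_sdk_client.__file__)  # shows which installation Python is picking up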

As for the ONNX, I see it’s large and needs to be split into contexts by the compiler. Can you provide the HAR file or the ONNX so I can try it myself?

Thank you, Omria. I have already sent you a private message with the model address attached. Thank you again for your patient response.
Best regards,
niyu6