Raspberry Pi 5 + Hailo-8L: hailomz optimize: ValueError: is not a valid MOConfigCommand

Dear all,

I am encountering an issue while optimizing a YOLOv8n model for the Hailo-8L using the Hailo AI Software Suite (version 2025-07:1) in a Docker container (host: d4002ee7b6ff, workspace: /workspace). Below are the details of the issue:

Problem Description: When running the optimization command:

hailomz optimize --hw-arch hailo8l --calib-path /workspace/processed_images/calib_set.npz --resize 640 640 --classes 2 --har yolov8n.har --model-script yolov8n.all yolov8n

I receive the following error:

ValueError: {'calibset_size': 128.0} is not a valid MOConfigCommand

This error occurs after updating the model script (yolov8n.all) to include calibset_size=128. Previously, I encountered a calibset_size=0 error when using checker_cfg in the model script.

Current Setup:

  • Model Script (/workspace/yolov8n.all):

    my_norm = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
    custom_resize_input = resize(resize_shapes=[640,640])
    model_optimization_config(calibset_size=128, dataset_size=128, batch_size=8, policy=enabled)
    
  • Calibration Data:

    • File: /workspace/processed_images/calib_set.npz

    • Content: 128 images, shape (128, 640, 640, 3), dtype uint8, values 0–255

    • Verified with:

      python3 -c "import numpy as np; d = np.load('/workspace/processed_images/calib_set.npz'); print(d.files); arr = d['calib_set']; print(arr.shape, arr.dtype, arr.min(), arr.max())"
      

      Output: ['calib_set'] (128, 640, 640, 3) uint8 0 255

    • Size: 151 MB

  • Input Model: yolov8n.har (converted from ONNX with opset=11)

  • Environment: CPU-based, no GPU available (logs indicate fallback to CPU).

Steps Taken:

  1. Updated yolov8n.all to remove checker_cfg and add calibset_size=128, based on community suggestions.

  2. Confirmed that the .npz file is valid, but learned that .npy is preferred for --calib-path (see the conversion sketch after this list).

  3. Attempted to create /workspace/processed_images/calib_set.npy using a Python script:

    import os
    import cv2
    import numpy as np
    
    img_dir = "/workspace/calib_images"
    out_file = "/workspace/processed_images/calib_set.npy"
    
    images = []
    for fname in sorted(os.listdir(img_dir))[:128]:
        if fname.lower().endswith((".jpg", ".jpeg", ".png")):
            img_path = os.path.join(img_dir, fname)
            img = cv2.imread(img_path)
            if img is None:
                print(f"❌ Failed to load: {fname}")
                continue
            img = cv2.resize(img, (640, 640))           # match the 640x640 model input
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert to RGB
            images.append(img)
            print(f"✅ Processed: {fname}")
    
    images = np.array(images, dtype=np.uint8)
    np.save(out_file, images)
    print(f"💾 Saved {images.shape} to {out_file}, dtype: {images.dtype}")
    
  4. Cleared SDK cache using rm -rf /tmp/hailo* /local/workspace/hailo_model_zoo/.cache.
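
As mentioned in step 2, the existing .npz already holds the full calibration array, so it can also be converted to .npy directly instead of re-reading the source images. A minimal sketch, assuming the array key is calib_set as reported by the verification output above:

    import numpy as np

    # Convert the existing .npz to a plain .npy file for --calib-path.
    # Assumes the key 'calib_set' shown by the verification output above.
    data = np.load("/workspace/processed_images/calib_set.npz")
    arr = data["calib_set"]  # (128, 640, 640, 3), uint8
    np.save("/workspace/processed_images/calib_set.npy", arr)
    print(arr.shape, arr.dtype)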

Additional Information:

  • I noticed from the Hailo Community that calibset_size and dataset_size may not be supported in model_optimization_config. I also tried:

    my_norm = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
    custom_resize_input = resize(resize_shapes=[640,640])
    model_optimization_config(batch_size=8, policy=enabled)
    model_optimization_flavor(optimization_level=2, compression_level=0)
    

    but I have not yet tested this configuration due to the .npz vs .npy issue.

  • The community suggests using a directory of images for --calib-path as an alternative, which I plan to try next.
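
Before trying the directory route, a quick check I plan to run (using the same /workspace/calib_images directory as in the preprocessing script above) to confirm enough usable images are present:

    import os

    # Count candidate calibration images before pointing --calib-path at the directory.
    img_dir = "/workspace/calib_images"
    exts = (".jpg", ".jpeg", ".png")
    files = [f for f in sorted(os.listdir(img_dir)) if f.lower().endswith(exts)]
    print(f"{len(files)} candidate calibration images in {img_dir}")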

Questions:

  1. Is calibset_size supported in model_optimization_config for the 2025-07:1 SDK? If not, what is the correct syntax for yolov8n.all to ensure proper calibration with 128 images?
  2. Does the SDK strictly require a .npy file for --calib-path, or can .npz be used with a specific configuration?
  3. Are there additional parameters or environment settings (e.g., HAILO_LOG_LEVEL=debug) that could help diagnose why the SDK fails to recognize the calibration data?
  4. Could you confirm if the opset=11 for the ONNX model is appropriate for YOLOv8n on Hailo-8L?
  5. Could you provide the correct commands and parameters for MOConfigCommand in the yolov8n.all file to avoid the ValueError: {'calibset_size': 128.0} is not a valid MOConfigCommand error?

Request: Please provide guidance on the correct yolov8n.all syntax and whether .npy is mandatory. Additionally, any recommendations to avoid the ValueError: {'calibset_size': 128.0} is not a valid MOConfigCommand error and to ensure successful optimization and compilation to HEF would be greatly appreciated.

Thank you for your support!

Best regards, Bogdan

Hi @Bogdan_Bogdan

Welcome to the Hailo community. At DeGirum (a SW partner of Hailo), we built a cloud compiler to help users compile yolo models to Hailo devices. You can see details here: Early Access to DeGirum Cloud Compiler


Hey @Bogdan_Bogdan ,

Welcome to the Hailo Community!

The error you’re seeing happens because of a syntax issue in your model script. The calibset_size parameter needs to be in the right place with the right format.

What’s Wrong and How to Fix It

Your model script is missing a required keyword. Instead of just putting calibset_size=128 on its own, you need to structure it like this:

# Correct format
model_optimization_config(calibration, batch_size=8, calibset_size=128, policy=enabled)

Notice the word calibration at the beginning - that’s crucial! Without it, the parser gets confused about where your parameters belong.

File Format Issues

You mentioned having trouble with .npz files. Here’s the deal:

  • Works great: Directory of images (JPG/PNG files)
  • Also works: Single .npy file with your data
  • Doesn’t work: .npz files (the Hailo Model Zoo CLI doesn’t support these)

Since you already converted to .npy, you’re good to go!

Commands That Should Work

Try this with your .npy file:

hailomz optimize yolov8n \
  --hw-arch hailo8l \
  --har /workspace/yolov8n.har \
  --calib-path /workspace/processed_images/calib_set.npy \
  --resize 640 640 \
  --classes 2 \
  --model-script /workspace/yolov8n.all

Or if you prefer using a directory of images:

hailomz optimize yolov8n \
  --hw-arch hailo8l \
  --har /workspace/yolov8n.har \
  --calib-path /workspace/calib_images \
  --resize 640 640 \
  --classes 2 \
  --model-script /workspace/yolov8n.all

Answers to Your Specific Questions

On calibset_size: yes, it’s supported! Use it like this: model_optimization_config(calibration, batch_size=8, calibset_size=128, policy=enabled). The key is including that calibration keyword at the start.

On .npz vs .npy: no, .npy isn’t strictly required. You can use either a directory full of images or a single .npy file. Just avoid .npz files, since they’re not supported by the Model Zoo CLI.

On debugging: set the environment variable HAILO_LOG_LEVEL=debug. Don’t worry if you see warnings about optimization levels on CPU-only machines - that’s normal.

On the ONNX opset: yes, opset 11 works well with Hailo’s YOLOv8 examples. If you run into issues, you could try opset 12 or 15, but avoid opset 18 as it’s not widely supported yet.
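
If you exported the ONNX with Ultralytics (an assumption on my part - your post doesn’t say how yolov8n.onnx was produced), you can pin the opset explicitly at export time. A minimal sketch:

from ultralytics import YOLO

# Assumption: the ONNX came from the Ultralytics exporter.
# Pin the opset and input size to match the rest of the flow.
model = YOLO("yolov8n.pt")
model.export(format="onnx", opset=11, imgsz=640)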

On the model script itself: make sure your .all file looks like this:

my_norm = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
custom_resize_input = resize(resize_shapes=[640, 640])

model_optimization_config(calibration, batch_size=8, calibset_size=128, policy=enabled)
model_optimization_flavor(optimization_level=2, compression_level=0)

Quick Checklist

  • Your calibration data should be either a directory with 128 images or a single .npy file with shape (128, 640, 640, 3) (see the quick sanity check after this list)
  • Your .all script must include the calibration keyword in the model_optimization_config line
  • Make sure you’re using a Hailo Model Zoo version that supports Hailo-8L (v2.x branch)
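
For the first checklist item, here’s a small sanity-check sketch (path taken from your post) you can run on the .npy before kicking off the optimization:

import numpy as np

# Expect (128, 640, 640, 3), uint8, values in the 0-255 range.
arr = np.load("/workspace/processed_images/calib_set.npy")
print(arr.shape, arr.dtype, arr.min(), arr.max())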

That should get you up and running! Feel free to share your final .all script if you want me to double-check it before you try again.
