Hey @engarlanded_boa,
It seems you’re facing memory management issues that result in either a “bad function call” error or a segmentation fault after inference. Let’s break down the potential problems and their solutions:
- Resource Cleanup. Problem: The destructor might not be properly releasing all resources during cleanup. Solution: Implement a dedicated cleanup function and call it explicitly before the program exits; this ensures all resources are properly deallocated (see the context-manager sketch after this list).
- Threading Issues. Problem: Background threads may not be correctly joined or terminated before program exit. Solution: Implement a context manager for your `HailoInferenceTestClass`; this ensures proper resource management and thread cleanup (see the sketch after this list).
- Memory Leaks. Problem: Possible memory leaks leading to resource exhaustion over time. Solution:
a) Use a memory profiler to identify and address potential memory leaks (a short `tracemalloc` snippet follows this list).
b) Implement batch processing to manage memory more efficiently:
```python
# Run inference in fixed-size batches to keep peak memory bounded.
def predict(self, X):
    results = []
    for i in range(0, len(X), self.batch_size):
        batch = X[i:i + self.batch_size]
        results.extend(self._predict_batch(batch))
    return results
```
- Buffer Management. Problem: The dynamic allocation in `_allocate_bindings` might be causing issues. Solution: Consider using a fixed-size buffer pool instead of dynamic allocation (a rough sketch follows this list).
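
Here’s a minimal sketch of the cleanup / context-manager idea from the first two points. It assumes your class owns a single background worker thread; the actual Hailo release calls (device handles, network group, buffers) are only indicated by comments, since they depend on your setup:

```python
import threading

class HailoInferenceTestClass:
    """Sketch only: everything except __enter__/__exit__ is a placeholder."""

    def __init__(self):
        self._stop_event = threading.Event()
        self._worker = threading.Thread(target=self._run)
        self._worker.start()

    def _run(self):
        # Background inference loop; exits promptly once the stop event is set.
        while not self._stop_event.is_set():
            self._stop_event.wait(0.1)

    def close(self):
        # Dedicated cleanup: signal the worker, join it, then release your
        # Hailo resources (device, buffers, bindings) here, before exit.
        self._stop_event.set()
        self._worker.join(timeout=5.0)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()
        return False


# Usage: cleanup runs even if inference raises an exception.
with HailoInferenceTestClass() as model:
    pass  # run inference here
```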
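
For the memory-profiler suggestion (point a), Python’s built-in `tracemalloc` is a quick way to check whether allocations keep growing between inference runs; nothing here is Hailo-specific:

```python
import tracemalloc

tracemalloc.start()

# ... run a number of inference iterations here ...

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:10]:
    print(stat)  # top allocation sites; compare snapshots to spot growth
```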
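
And a rough sketch of the fixed-size buffer pool idea. The shape, dtype, and pool size below are placeholders; the point is that buffers are allocated once up front and reused, instead of being re-allocated inside `_allocate_bindings` on every call:

```python
import queue
import numpy as np

class BufferPool:
    """Pre-allocates a fixed number of buffers and hands them out for reuse."""

    def __init__(self, count, shape, dtype=np.uint8):
        self._pool = queue.Queue(maxsize=count)
        for _ in range(count):
            self._pool.put(np.zeros(shape, dtype=dtype))

    def acquire(self, timeout=1.0):
        # Blocks until a buffer is free, so memory use stays bounded.
        return self._pool.get(timeout=timeout)

    def release(self, buf):
        self._pool.put(buf)


# Hypothetical usage in the inference loop:
pool = BufferPool(count=4, shape=(640, 640, 3))
buf = pool.acquire()
try:
    pass  # bind `buf` as the output buffer for this inference call
finally:
    pool.release(buf)
```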
Additional Recommendations:
- Ensure you’re using the latest version of the Hailo SDK.
- Implement more robust error handling and logging to pinpoint issues.
- Replace the `input()` wait with a timed approach using `time.sleep()` to avoid potential input-related issues (a small example follows this list).
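
For the `input()` replacement, something along these lines is enough (the wait time is arbitrary):

```python
import time

# Give background threads / async callbacks time to finish instead of
# blocking on input(), which can hang when no TTY is attached.
time.sleep(5)
```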
If you implement these changes and still face problems, please share your updated code along with any new error messages or logs. Also, providing information about your Hailo SDK version and system specifications would be helpful for further troubleshooting.