Hi there,
I’ve successfully converted and compiled a model with compiler_optimization_level set to max.
Here is the utilisation table:
[info] +-----------+---------------------+---------------------+--------------------+
[info] | Cluster   | Control Utilization | Compute Utilization | Memory Utilization |
[info] +-----------+---------------------+---------------------+--------------------+
[info] | cluster_0 | 100%                | 60.9%               | 35.9%              |
[info] | cluster_1 | 100%                | 68.8%               | 65.6%              |
[info] | cluster_2 | 87.5%               | 100%                | 61.7%              |
[info] | cluster_3 | 100%                | 87.5%               | 97.7%              |
[info] | cluster_4 | 81.3%               | 92.2%               | 96.9%              |
[info] | cluster_5 | 93.8%               | 98.4%               | 93.8%              |
[info] | cluster_6 | 100%                | 60.9%               | 78.1%              |
[info] | cluster_7 | 81.3%               | 84.4%               | 50%                |
[info] +-----------+---------------------+---------------------+--------------------+
[info] | Total     | 93%                 | 81.6%               | 72.5%              |
[info] +-----------+---------------------+---------------------+--------------------+
This process took many hours. How would I configure the model script to reproduce the same compilation result without having to run the exhaustive search for optimal parameters again?
Could I use commands like these:
performance_param(compiler_optimization_level=1)
resources_param(strategy=greedy, max_control_utilization=1.0, max_compute_utilization=1.0, max_memory_utilization=0.97)
context_switch_param(mode=disabled)
If so, based on the utilisation table above, what should I set the utilisation thresholds to? Do the thresholds refer to the per-cluster utilisation or to the total utilisation for each resource type?
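In case it helps, this is roughly how I intend to apply the script via the DFC Python API. It's only a sketch: the HAR/HEF paths are placeholders, and the threshold values in the resources_param line are just my guesses from the table above.

from hailo_sdk_client import ClientRunner

# The model script commands I'm asking about, joined into one .alls string
alls = "\n".join([
    "performance_param(compiler_optimization_level=1)",
    "resources_param(strategy=greedy, max_control_utilization=1.0, "
    "max_compute_utilization=1.0, max_memory_utilization=0.97)",
    "context_switch_param(mode=disabled)",
])

# Load the already-quantized HAR, apply the script, and compile to a HEF
runner = ClientRunner(har="my_model_quantized.har")  # placeholder path
runner.load_model_script(alls)
hef = runner.compile()

with open("my_model.hef", "wb") as f:  # placeholder output path
    f.write(hef)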
Thanks