imblearn.metrics.macro_averaged_mean_absolute_error() raises an exception when not all classes are represented in the ground truth. Thought I'd flag this because a comparable function in sklearn, f1_score with macro averaging, behaves differently from imbalanced-learn: it does not raise in the same situation. Here's an example:
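Below is a minimal sketch of the discrepancy, assuming current imbalanced-learn and scikit-learn APIs; the labels are illustrative and the exact exception type/message may vary by version. Class 2 appears in y_pred but never in y_true:

```python
import numpy as np
from sklearn.metrics import f1_score
from imblearn.metrics import macro_averaged_mean_absolute_error

# Class 2 is predicted but never occurs in the ground truth.
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 2, 1])

# sklearn's macro-averaged F1 tolerates the extra class: it scores it
# as 0 (emitting an UndefinedMetricWarning) and still returns a number.
print(f1_score(y_true, y_pred, average="macro"))

# imblearn's metric raises instead, since the per-class MAE for class 2
# would be computed over an empty slice of y_true.
try:
    print(macro_averaged_mean_absolute_error(y_true, y_pred))
except ValueError as exc:
    print(f"raised: {exc}")
```

I'd expect the two macro-averaged metrics to treat a missing class consistently, whether that means skipping the absent class, scoring it as 0, or at least documenting that an exception is raised.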