A basic and natural approach to interpretability is to provide explanations of an ML model's predictions in terms of input features [14]. For this reason, most work on explaining the predictions of black-box models has, in some sense, relied on the features that have some influence ...