Interest in interpretable models that are not only accurate but also understandable is growing rapidly, and the machine-learning community has increasingly turned to decision tree classifiers. Many techniques for growing decision trees use oblique rules to increase the accuracy of the tree and decrease its overall size, but such rules severely limit understandability for a human user. We propose a new type of oblique rule for decision tree classifiers that remains interpretable to human users. We use the parallel coordinates system of visualisation to display both the dataset and the rules to the user in an intuitive way. We propose the use of an evolutionary algorithm to learn this new type of rule and show that it produces significantly smaller trees than a tree created with axis-parallel rules, with minimal loss in accuracy.
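To make the distinction concrete, the sketch below contrasts an axis-parallel split (a threshold on a single feature) with an oblique split (a threshold on a weighted linear combination of features, i.e. a hyperplane). This is a minimal illustration of the general rule types discussed above, not the specific rule representation proposed in the paper; all function names and values are hypothetical.

```python
import numpy as np

def axis_parallel_rule(x, feature, threshold):
    # Axis-parallel test: compare one feature of sample x to a threshold.
    return x[feature] <= threshold

def oblique_rule(x, weights, threshold):
    # Oblique test: compare a weighted combination of all features to a
    # threshold, defining a hyperplane rather than an axis-aligned split.
    return float(np.dot(weights, x)) <= threshold

sample = np.array([1.0, 2.0])
print(axis_parallel_rule(sample, 0, 1.5))                # single-feature split
print(oblique_rule(sample, np.array([0.5, 0.5]), 2.0))   # hyperplane split
```

A single oblique split can separate classes that would require several stacked axis-parallel splits, which is why oblique trees tend to be smaller but harder for a human to read off directly.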