
Featureless_Bug t1_j0guhqs wrote

This is a joke and not a paper, tbh. "Therefore, for continuous activations, the neural network equivalent tree immediately becomes infinite width even for a single filter" - whoever wrote this has no idea what infinity actually means, or that a decision tree of infinite width is by definition not a decision tree anymore. And they try to sell it as something that would increase the explainability of neural networks, just wow. Is there a way to request removal of a "paper" from arXiv?
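To spell out the point (a minimal sketch of my own, not code from the paper): with a piecewise-linear activation like ReLU, a small network only has finitely many on/off activation patterns, so you can at least enumerate them as branches of a tree. With a smooth activation like tanh there are no discrete on/off states at all, so there is no finite set of branches to enumerate - which is exactly what the "infinite width tree" hand-waving is papering over.

```python
import numpy as np

# Tiny 1-hidden-layer net: 2 inputs -> 3 ReLU units (weights are arbitrary).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)

def relu_pattern(x):
    # Which hidden units are active (ReLU on/off) for input x.
    return tuple((W1 @ x + b1 > 0).astype(int))

# Sample inputs and collect the distinct on/off patterns.
xs = rng.normal(size=(10_000, 2))
patterns = {relu_pattern(x) for x in xs}
print(len(patterns))  # at most 2**3 = 8 patterns -> a finite set of "branches"

# With tanh instead of ReLU there is no on/off pattern to enumerate,
# so there is no finite branching structure to call a decision tree.
```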
