Submitted by RAFisherman t3_114d166 in MachineLearning
I've been studying ARIMAX, XGBoost, MLForecast and Prophet. As a newcomer to a method, I like to start with an exhaustive comparison of tools to understand where they succeed/fail. After exploring ARIMA/XGBoost, I came across MLForecast/Prophet. But I'm left with the following questions:
- Why is MLForecast better than out-of-the-box XGBoost? Sure, it does feature engineering and it appears to do dynamic (recursive) predictions on your lagged features, but is that it? Does it do hyperparameter tuning? Does it model seasonality the way Prophet does?
- I see that you can use exogenous features in Prophet, but how does this scale? Let's assume I have 50 predictors. How does Prophet handle these? I found this in the docs, and this other person's post explains how to do it, but largely I've come away with the impression that it's pretty hard to do this vs. just doing it with XGBoost.
- Is ARIMAX still competitive? Are there any papers comparing out-of-sample predictions of ARIMAX vs. XGBoost vs. Prophet vs. Fable? Or does it just depend on the dataset, and I should try all four?
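For intuition on what "dynamic predictions on lagged features" means, here is a minimal numpy sketch of the recursive strategy — not MLForecast's actual code, just the idea it automates: fit a model on lag features, then forecast multiple steps ahead by feeding each prediction back in as a lag.

```python
import numpy as np

def make_lag_matrix(y, lags):
    """Build a design matrix of lagged values of y (rows where all lags exist)."""
    max_lag = max(lags)
    X = np.column_stack([y[max_lag - l : len(y) - l] for l in lags])
    target = y[max_lag:]
    return X, target

def recursive_forecast(y, lags, horizon):
    """Fit a linear AR model on lag features, then forecast `horizon` steps
    ahead by feeding each prediction back in as a lag (the 'dynamic' part)."""
    X, target = make_lag_matrix(y, lags)
    A = np.column_stack([np.ones(len(X)), X])      # add an intercept column
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    history = list(y)
    preds = []
    for _ in range(horizon):
        feats = np.array([1.0] + [history[-l] for l in lags])
        yhat = float(feats @ coef)
        preds.append(yhat)
        history.append(yhat)   # prediction becomes a lag for the next step
    return preds

# toy series: a linear trend, which the lag model extrapolates
y = np.arange(100, dtype=float)
print(recursive_forecast(y, lags=[1, 2], horizon=3))
```

Swap the least-squares fit for XGBoost and you have roughly what MLForecast orchestrates for you (lag construction, recursion, multiple series); the library itself doesn't do hyperparameter tuning — you'd still bring your own search.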
I have time series data with dozens of "known" inputs (such as ad spend) and a lot of external data (CPI, economic health, stocks, etc.). My goal is to use my model to optimize my target by "plugging in" ad spend and dynamically forecasting the economic data.
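The "plugging in" workflow can be sketched generically, independent of which library fits the model: planned ad spend is a known future input you choose, while economic covariates like CPI must themselves be rolled forward with their own forecast. The coefficients and AR(1) dynamics below are hypothetical, purely to show the mechanics.

```python
import numpy as np

def ar1_step(series, phi):
    """One-step AR(1) forecast for an exogenous series (zero-mean toy model)."""
    return phi * series[-1]

def scenario_forecast(coef, last_cpi, planned_ad_spend, phi_cpi):
    """Forecast the target over a planning horizon.
    `planned_ad_spend` is known in advance (you choose it); CPI is
    dynamically forecast step by step. `coef` is assumed pre-fit."""
    b0, b_ad, b_cpi = coef
    cpi_path = [last_cpi]
    preds = []
    for ad in planned_ad_spend:
        cpi = ar1_step(cpi_path, phi_cpi)           # forecast the covariate
        cpi_path.append(cpi)
        preds.append(b0 + b_ad * ad + b_cpi * cpi)  # plug in planned spend
    return preds

# hypothetical fitted coefficients: intercept, ad-spend effect, CPI effect
print(scenario_forecast((1.0, 2.0, 0.5), last_cpi=100.0,
                        planned_ad_spend=[10.0, 20.0], phi_cpi=0.9))
```

To optimize ad spend you would wrap `scenario_forecast` in a search over candidate spend paths; the key design question for any of the four libraries is whether they let you supply future values for the known regressors while forecasting the unknown ones.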
pyfreak182 t1_j8vpx4e wrote
In case you are not familiar, there are also Time2Vec embeddings for Transformers. It would be interesting to see how that architecture compares as well.
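For reference, Time2Vec (Kazemi et al., 2019) maps a timestamp τ to a vector whose first component is linear in τ and whose remaining components are periodic: t2v(τ)[0] = ω₀τ + φ₀, and t2v(τ)[i] = sin(ω_iτ + φ_i) for i ≥ 1. A minimal numpy sketch — the frequencies here are illustrative, whereas in a Transformer ω and φ are learned:

```python
import numpy as np

def time2vec(tau, omega, phi):
    """Time2Vec embedding: component 0 is linear in time,
    the remaining components are sine (periodic) features."""
    v = omega * tau + phi      # shape (k+1,)
    v[1:] = np.sin(v[1:])      # keep component 0 linear
    return v

# illustrative parameters: one linear term plus weekly and yearly periods
omega = np.array([0.5, 2 * np.pi / 7.0, 2 * np.pi / 365.0])
phi = np.zeros(3)
print(time2vec(3.0, omega, phi))
```

The embedding is concatenated with (or added to) the token features, giving the attention layers an explicit notion of both trend and seasonality.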