Submitted by Fender6969 t3_11ujf7d in MachineLearning
In the context of ML pipelines (ETL, model training, model deployment, and model serving scripts), are there any best practices on what test coverage to implement for these code artifacts?
I treat code artifacts of ML pipelines like any other software. I aim for 100% test coverage. Probably a bit controversial, but I always keep a small amount of example data in the repo for unit and integration tests. It could also be downloaded from blob storage in the CI pipeline, but repo size is usually not the limiting factor.
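A minimal sketch of what that can look like with pytest; the path tests/data/sample.csv and the "target" column are hypothetical placeholders for whatever small example data you commit:

```python
# Sketch only: the sample path and column name are made up for illustration.
import pandas as pd
import pytest


@pytest.fixture(scope="session")
def sample_df() -> pd.DataFrame:
    # A few hundred representative rows committed alongside the tests.
    return pd.read_csv("tests/data/sample.csv")


def test_target_column_is_populated(sample_df):
    # Example unit test built on the checked-in sample data.
    assert sample_df["target"].notna().all()
```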
Having a schema and generating random or synthetic data based on that schema is my way to go for testing.
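For illustration, a tiny schema-driven generator might look like this; the schema and column names are invented, and in practice you would derive them from whatever schema definition your pipeline already has:

```python
# Minimal sketch of generating random test data from a hand-written schema.
import numpy as np
import pandas as pd

SCHEMA = {
    "age": ("int", 18, 90),
    "income": ("float", 0.0, 250_000.0),
    "churned": ("category", ["yes", "no"]),
}


def synthetic_frame(schema=SCHEMA, n_rows=100, seed=0):
    """Build a DataFrame whose columns conform to a simple (type, bounds) schema."""
    rng = np.random.default_rng(seed)
    cols = {}
    for name, spec in schema.items():
        if spec[0] == "int":
            cols[name] = rng.integers(spec[1], spec[2] + 1, size=n_rows)
        elif spec[0] == "float":
            cols[name] = rng.uniform(spec[1], spec[2], size=n_rows)
        elif spec[0] == "category":
            cols[name] = rng.choice(spec[1], size=n_rows)
    return pd.DataFrame(cols)
```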
More or less the same here. However, the simplest way to start, at least in my experience, is to randomize a subsample of real data. Synthetic data can be too simple / fail to capture the real distribution, and that can hide bugs.
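One way to read "randomize a subsample" is to sample a few hundred rows and shuffle each column independently, so per-column distributions stay realistic but rows no longer correspond to real records (whether that is enough for your privacy constraints is a separate question). A rough sketch:

```python
# Rough sketch of randomizing a subsample of real data for tests.
# Shuffling columns independently breaks row-level linkage but keeps
# per-column distributions; it does NOT guarantee privacy on its own.
import numpy as np
import pandas as pd


def scrambled_sample(df: pd.DataFrame, n: int = 500, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    sample = df.sample(n=min(n, len(df)), random_state=seed).reset_index(drop=True)
    for col in sample.columns:
        # Each column gets its own permutation, so rows are recombined at random.
        sample[col] = rng.permutation(sample[col].to_numpy())
    return sample
```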
Probably both is the ideal solution.
Also depends on privacy constraints, sometimes you can't persist the data.
Many of the clients I support have rather sensitive data and persisting this into a repo would be a security risk. I suppose creating synthetic data would be the next best alternative.
I'm glad I'm not the only one
Is there a reason why you feel the need for such rigour? 100% is quite overkill even for typical software projects IMO.
You probably end up having to write tests even for simple one-liner functions, which gets exhausting.
Should have been more precise. 100% of what goes into any pipeline or the deployment gets tested. We deploy many models on the edge in manufacturing. If the model fails, the production line might stand still. Can't risk that.
Ah, that makes sense.
Any specific part you're wondering about? General advice applies here: test each unit of your software, and then integrate the units and test them that way. For each unit, hardcode the input and then test that the output is what you expect. For unit tests, make them as simple as possible while still testing as much of the functionality as possible. For integration tests, make a variety of them ranging from just a couple combined units & simple input/output to end-to-end tests that simulate the real world as closely as possible.
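For example, the hardcode-the-input pattern for a single unit could look like this; normalize() is just a hypothetical stand-in for one of your own functions:

```python
# Sketch of a unit test with hardcoded input and expected output.
import numpy as np


def normalize(x):
    """Scale a vector to zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()


def test_normalize_known_values():
    out = normalize([0.0, 2.0, 4.0])
    np.testing.assert_allclose(out, [-1.2247448, 0.0, 1.2247448], atol=1e-6)
    np.testing.assert_allclose(out.mean(), 0.0, atol=1e-12)
```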
This is all advice that's not specific to ML in any way. Anything more specific depends on a lot of factors.
For example: Will your dataset change? Will it change just a little (MNIST to Fashion-MNIST) or a lot (MNIST to CIFAR)? Will your model change? Will it just be a new training run of the same model? Will the model architecture stay the same or will it change internally? Will the input or output format of the model change? How often will any of these changes happen? Which parts of the pipeline are manual, and which are automatic? For each part of the system, what are the consequences of it failing (does it merely block further development, or will you get angry calls from your clients)?
Edit: I think the best advice I can give is to test everything that can possibly be tested, but prioritize based on risk impact (chance_of_failure * consequences_of_failure).
Thanks for the response. I think hardcoding things might make the most sense. Ignoring testing the actual data for a minute, let us say I have an ML pipeline with the following units:
Do the above unit tests make sense to add to a CI pipeline?
Most of that makes sense. The only thing I would be concerned about is the model training test. Firstly, a unit test should test the smallest possible unit. You should have many unit tests to test your model, and you should focus on those tests being as simple as possible. Nearly every function you write should have its own unit test, and no unit test should test more than one function. Secondly, there is an important difference between verification and validation testing. Verification testing shouldn't test for any particular accuracy threshold or anything like that, it should at most verify things like "model.fit() causes the model to change" or "a linear regression model that is all zeroes produces an output of zero." Verification testing is what you put on your CI pipeline to sanity check your code before it gets merged to master. Validation testing, however, should test model accuracy. It should go on your CD pipeline, and should validate that the model you're trying to push to production isn't low quality.
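To make the verification examples concrete, here is a sketch using scikit-learn models purely as placeholders for whatever the pipeline actually trains:

```python
# Sketch of verification-style sanity checks (CI), not validation checks (CD).
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor


def test_training_step_changes_the_model():
    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=32)
    model = SGDRegressor(random_state=0)
    model.partial_fit(X, y)
    before = model.coef_.copy()
    model.partial_fit(X, y)
    # Sanity check only: another training step should move the weights.
    assert not np.allclose(before, model.coef_)


def test_all_zero_linear_model_predicts_zero():
    model = LinearRegression().fit(np.eye(4), np.zeros(4))  # fit on dummy data
    model.coef_ = np.zeros(4)                               # force all-zero weights
    model.intercept_ = 0.0
    assert np.allclose(model.predict(np.ones((3, 4))), 0.0)
```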
This makes perfect sense thank you. I’m going to think through this further. If you have any suggestions for verification/sanity testing for any of the components listed above please let me know.
I've been trying to figure it out myself and I'm very curious to see other responses.
For ETL, write unit tests that handle some input edge cases, e.g. null values, mis-formatted fields, and values out of range, as well as some simple working cases.
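A sketch of what those edge-case tests can look like; clean_age() is a made-up stand-in for one of the real transforms:

```python
# Sketch of ETL edge-case unit tests around a hypothetical transform.
import pandas as pd


def clean_age(s: pd.Series) -> pd.Series:
    """Coerce to numeric, drop out-of-range values, fill missing with the median."""
    s = pd.to_numeric(s, errors="coerce")   # mis-formatted values -> NaN
    s = s.where((s >= 0) & (s <= 120))      # out-of-range values -> NaN
    return s.fillna(s.median())


def test_clean_age_handles_edge_cases():
    raw = pd.Series([25, None, "thirty", -5, 200, 40])
    cleaned = clean_age(raw)
    assert cleaned.isna().sum() == 0        # nulls filled
    assert cleaned.between(0, 120).all()    # range enforced


def test_clean_age_simple_case():
    assert clean_age(pd.Series([20, 30, 40])).tolist() == [20, 30, 40]
```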
For model training, the focus is on having "valid" hyperparameters and configurations. I write test cases that try to overfit on a small training set, i.e. confirm the model learns. There are also some robustness tests I sometimes run post-training, but those are very specific to certain NLP tasks and applications.
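The overfit-a-tiny-set check might look roughly like this, with a small scikit-learn model standing in for the real model and training routine:

```python
# Sketch of the "confirm the model can learn" check on a tiny dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier


def test_model_can_overfit_tiny_dataset():
    rng = np.random.default_rng(0)
    X = rng.normal(size=(16, 8))
    y = (X[:, 0] > 0).astype(int)  # an easy, learnable signal
    model = MLPClassifier(hidden_layer_sizes=(32,), solver="lbfgs",
                          max_iter=1000, random_state=0)
    model.fit(X, y)
    # If the model cannot (nearly) memorise 16 points, training is broken.
    assert model.score(X, y) >= 0.9
```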
For model serving, test successful parsing of the request and the subsequent feature transformation (if any); this is very similar to ETL.
Bash a big ole dataset through as an integration test and call it a done job. In my experience, DS moves too fast for testing to be as effective as it is for SWEs (no matter how carefully I've written my tests, they've never lasted more than 12 months before becoming a nuisance that people started ignoring).
Yeah I am always uncomfortable pushing untested code to Production. I think I have some good ideas for what to add to my CI pipeline regarding unit tests.
I would love to know more about this. One of my ideas is a super simple dataset that the model can easily overfit, just to show that the code works. Other than that, I would love to hear more about simple tests that don't need to run the full pipeline on the full dataset (which ultimately doesn't tell you that much).