
Showroom methods for machine learning

Machine Learning (ML) software is a software system with one or more components that learn from data. This entails engineering a pipeline for the collection and pre-processing of data, the training of an ML model, the deployment of the trained model to perform inference, and the engineering of the encompassing software system that sends new input data to the model to obtain answers.

For Machine Learning projects, the existing Showroom methods can be applied as follows:

  • Benchmark test “Benchmark tests are regularly used to test pattern recognition software. If a standard set of data is recognised with the software, the results can be compared to that of other software.” (quote from Benchmark test) An example is the MNIST dataset of handwritten digits, which also reports test error rates for many different methods. A minimal benchmark sketch follows this list.
  • Ethical check Ethical checks are even more important for ML solutions because you do not design the rules yourself, but let the system learn the rules from data. If the data is biased, this can result in unintended behaviour. You need a way to ensure that the behaviour of the system stays within ethical boundaries. You must also make sure that the system does not learn something that invades people’s privacy. The Technology Impact Cycle Tool provides a practical translation of several ethical considerations into a questionnaire and might be helpful for your project as well. A minimal bias-check sketch follows this list.
  • Guideline conformity analysis The guidelines that apply to software systems of course also apply to ML software systems. Since 2017, an ISO group (ISO/IEC JTC 1/SC 42) has been working on standardization in the area of Artificial Intelligence.
  • Peer review Having your work reviewed by peers is always a good idea. ML code is difficult to review because most ML algorithms are black boxes. It might be more valuable to organize a walkthrough of your ML experiments in which you talk your peers through your way of thinking in designing the ML model. How did you arrive at the final algorithm and its hyperparameters?
  • Pitch It is a good idea to also think about the business proposition or opportunity you see for your ML solution. Knowing the added value of your solution helps you make decisions during your engineering process.
  • Product review Because of the experimentation required for engineering ML models, you will probably be working in an iterative or agile way. It is good practice to deliver a working model/product at the end of each iteration and have it reviewed by the client or users. If it is a model you deliver, you need to think of a good way for others to review or demonstrate it; a sketch of a reviewable model delivery follows this list.
  • Static program analysis You should also run code quality checkers on your ML software. However, since the most important code (the ML library calls) behaves as a black box, you need additional quality checks as well; a sketch of such a complementary check follows this list.
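
For the Benchmark test, below is a minimal sketch, assuming scikit-learn is available. The classifier is an illustrative placeholder; the point is that the reported test error rate can be compared against results published for other methods on MNIST.

```python
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression

# MNIST: 70,000 handwritten digits, 28x28 pixels flattened to 784 features.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0  # scale pixel values to [0, 1]

# Conventional split: first 60,000 images for training, last 10,000 for testing.
X_train, y_train = X[:60000], y[:60000]
X_test, y_test = X[60000:], y[60000:]

model = LogisticRegression(max_iter=1000)  # illustrative model, not a benchmark winner
model.fit(X_train, y_train)

error_rate = 1.0 - model.score(X_test, y_test)
print(f"Test error rate: {error_rate:.2%}")  # compare with rates published for other methods
```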
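
For the Ethical check, the sketch below shows one small, concrete check: disaggregated evaluation of a model’s accuracy per group. The data and the sensitive attribute "group" are made-up placeholders; a real ethical check needs domain-specific criteria such as those in the Technology Impact Cycle Tool questionnaire.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical predictions of a trained model on a labelled test set.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 0, 0, 0, 0, 1],
    "group":  ["a", "a", "a", "a", "b", "b", "b", "b"],
})

# Disaggregated evaluation: a large accuracy gap between groups is a red flag
# that the model may have picked up a bias that is present in the data.
for group, subset in results.groupby("group"):
    acc = accuracy_score(subset["y_true"], subset["y_pred"])
    print(f"accuracy for group {group}: {acc:.2f}")
```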
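
For the Product review, the sketch below shows one possible way, assuming scikit-learn and joblib, to make a delivered model reviewable: persist the whole pipeline and expose a single, documented prediction function that a client or peer can run. The file name, dataset, and function are illustrative.

```python
import joblib
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Train and persist the full pipeline, so reviewers do not need to know the
# preprocessing details in order to try the model.
X, y = load_digits(return_X_y=True)
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
joblib.dump(pipeline, "digit_classifier.joblib")

def predict_digit(image_vector):
    """Entry point for reviewers: takes one flattened 8x8 digit image, returns the predicted digit."""
    model = joblib.load("digit_classifier.joblib")
    return int(model.predict([image_vector])[0])

print(predict_digit(X[0]))  # a demo call that a reviewer can reproduce
```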
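
For Static program analysis, linters such as flake8 or pylint check the code itself, but not the learned behaviour behind the ML library calls. The sketch below is one example of a complementary behavioural smoke test; the dataset, model, and threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

proba = model.predict_proba(X)
assert proba.shape == (len(X), 10)          # one probability per digit class
assert np.allclose(proba.sum(axis=1), 1.0)  # each row is a valid probability distribution
assert model.score(X, y) > 0.9              # sanity threshold on training accuracy
print("Behavioural smoke test passed.")
```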