I thought it was for Bozo
Who knows. The only thing that came to my mind reading that is the joke “statement made by the utterly deranged”. And then I realized there is no joke.
Yeah, neural network training is notoriously easy to reproduce /s.
Just a few things can affect the results: source data, data labels, network structure, training parameters, version of the training script, versions of libraries, seed for the random number generator, hardware, operating system.
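To make that list concrete, here is a minimal sketch of pinning the controllable items and recording the rest, assuming a PyTorch + NumPy training script (the library choice is my assumption, not something the article states; hardware and OS can only be logged, not controlled):

```python
import json
import platform
import random
import sys

import numpy as np
import torch


def pin_rng(seed: int = 42) -> None:
    """Seed every RNG the training run touches."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism on CUDA; ops without a deterministic
    # implementation will only warn instead of failing.
    torch.use_deterministic_algorithms(True, warn_only=True)
    torch.backends.cudnn.benchmark = False


def snapshot_environment(path: str = "run_env.json") -> dict:
    """Record library versions, hardware, and OS next to the run artifacts."""
    env = {
        "python": sys.version,
        "torch": torch.__version__,
        "numpy": np.__version__,
        "cuda": torch.version.cuda,
        "gpu": torch.cuda.get_device_name(0) if torch.cuda.is_available() else None,
        "os": platform.platform(),
    }
    with open(path, "w") as f:
        json.dump(env, f, indent=2)
    return env


if __name__ == "__main__":
    pin_rng(42)
    print(snapshot_environment())
```

Even with all of that pinned, the source data, labels, and the exact script version still have to be archived separately, which is the harder half of the problem.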
Also, deployment is another can of worms.
Also, even if you have the open-source script, the data, and the labels, there’s no guarantee you’ll have useful documentation for any of them.
That article gave me whiplash. First part: pretty cool. Second part: deeply questionable.
For example, these two paragraphs from the sections ‘problem with code’ and ‘magic of data’:
Well, “just read the dataset bro” sounds great until you are staring at a dataset with 100,000 examples and someone is asking you to interpret it.
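In practice “reading” a dataset that size means summarizing it, not eyeballing rows. A hedged sketch of what that looks like with pandas (the file name and the "label" column are hypothetical, purely for illustration):

```python
import pandas as pd

df = pd.read_csv("dataset.csv")  # hypothetical path

# Size and schema at a glance.
print(df.shape)
print(df.dtypes)

# Label distribution: class imbalance is invisible when scrolling rows.
print(df["label"].value_counts(normalize=True))

# Duplicates and missing values, two of the usual surprises.
print("duplicates:", df.duplicated().sum())
print("missing values per column:\n", df.isna().sum())

# A random sample is still worth reading by hand; all 100,000 rows are not.
print(df.sample(20, random_state=0))
```

And none of that tells you how the labels were produced or what edge cases the annotators argued about, which is usually what you actually need to know.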