(Replying to PARENT post)
http://evolvingstuff.blogspot.com/2011/02/animated-fractal-f...
These are related to recurrent neural networks evolved to maximize fitness whilst wandering through a randomly generated maze and picking up food pellets (the advantage being to remember not to revisit where you have already been).
(Replying to PARENT post)
It's the researchers who prefer these solutions, not the networks. And that's how the networks find them: the experimenters have access to the test data, and they keep tuning their networks' parameters until they perfectly fit not only the training data, but also the _test_ data.
In that sense the testing data is not "unseen". The neural net doesn't "see" it during training, but the researchers do, and they can try to improve the network's performance on it, because they control everything about how the network is trained, when it stops training, and so on.
It has nothing to do with loss functions, and the answers are not in the maths. It's good old researcher bias, and it has to be controlled by other means, namely rigorous design _and description_ of experiments.
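A toy illustration of why this matters (mine, not from the thread): even when every "model" is pure noise, repeatedly evaluating candidates on the same test set and keeping the best one inflates the reported test accuracy well above chance.

    # Toy illustration: "tuning" against the same test set inflates the score
    # even when every model is pure noise.
    import numpy as np

    rng = np.random.default_rng(0)
    n_test = 200                             # size of the held-out "test" set
    y_test = rng.integers(0, 2, n_test)      # true labels (coin flips)

    best_acc = 0.0
    for trial in range(1000):                # 1000 rounds of "improving the model"
        preds = rng.integers(0, 2, n_test)   # each "model" just guesses randomly
        acc = (preds == y_test).mean()
        best_acc = max(best_acc, acc)        # keep whichever looked best on test

    print(f"best test accuracy after 1000 peeks: {best_acc:.2%}")  # well above 50%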
(Replying to PARENT post)
Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes
Lei Wu, Zhanxing Zhu, Weinan E
https://arxiv.org/abs/1706.10239
I think it was the first paper to study the volume of the basins of attraction of good global minima; it used a poisoning scheme to highlight how frequent bad global minima are, even though they are typically not found by SGD on the original dataset without poisoning.
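A rough sketch of the poisoning idea as I understand it (my own toy PyTorch setup, not the authors' code; the data and architecture are made up): an over-parameterized net can fit clean data plus randomly-labelled "poison" points, but the minimum it lands in tends to generalize worse than one found on the clean data alone.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    def make_data(n):                              # synthetic 2-class task
        x = torch.randn(n, 10)
        y = (x[:, 0] + x[:, 1] > 0).long()
        return x, y

    x_clean, y_clean = make_data(500)
    x_test, y_test = make_data(2000)
    x_poison = torch.randn(500, 10)                # poison: same input distribution,
    y_poison = torch.randint(0, 2, (500,))         # but random labels

    def train(x, y, steps=3000):
        net = nn.Sequential(nn.Linear(10, 512), nn.ReLU(), nn.Linear(512, 2))
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            F.cross_entropy(net(x), y).backward()
            opt.step()
        return net

    for name, x, y in [
        ("clean only", x_clean, y_clean),
        ("clean + poison",
         torch.cat([x_clean, x_poison]),
         torch.cat([y_clean, y_poison])),
    ]:
        net = train(x, y)
        with torch.no_grad():
            acc = (net(x_test).argmax(1) == y_test).float().mean().item()
        print(f"{name:15s} test accuracy: {acc:.2%}")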
(Replying to PARENT post)
In small (two or three) dimensions, there are ways of visualizing overtraining/regularization/generalization with scatter plots (perhaps coloured by output label) of the activations in each layer. Training will form tighter "modes" in the activations, and the "low density" space between modes constitutes "undefined input space" for subsequent layers. Overtraining is when real data falls in these "dead" regions. The aim of regularization is to shape the activation distributions such that unseen data falls somewhere with non-zero density.
Training loss does not give any information on generalization here unless it shows you're in a narrow "well". The loss landscapes are high-dimensional and non-obvious to reason about, even in tiny examples.
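A minimal sketch of the kind of activation scatter plot described above (my own toy setup; the data, architecture and 2-unit hidden layer are just for illustration):

    import torch
    import torch.nn as nn
    import matplotlib.pyplot as plt

    torch.manual_seed(0)

    def make_data(n):                              # two noisy 2-D blobs
        y = torch.randint(0, 2, (n,))
        x = torch.randn(n, 2) + 2.5 * y.unsqueeze(1).float()
        return x, y

    x_train, y_train = make_data(400)
    x_heldout, y_heldout = make_data(400)

    hidden = nn.Sequential(nn.Linear(2, 2), nn.Tanh())   # 2-D so we can plot it
    head = nn.Linear(2, 2)
    opt = torch.optim.Adam(list(hidden.parameters()) + list(head.parameters()), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        nn.functional.cross_entropy(head(hidden(x_train)), y_train).backward()
        opt.step()

    with torch.no_grad():
        a_train = hidden(x_train).numpy()
        a_held = hidden(x_heldout).numpy()

    # held-out points falling in the empty space between the coloured
    # clusters are landing in the "dead" regions discussed above
    plt.scatter(a_train[:, 0], a_train[:, 1], c=y_train.numpy(),
                marker="o", alpha=0.4, label="train")
    plt.scatter(a_held[:, 0], a_held[:, 1], c=y_heldout.numpy(),
                marker="x", alpha=0.8, label="held-out")
    plt.legend(); plt.title("hidden-layer activations, coloured by label")
    plt.show()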
(Replying to PARENT post)
"Why might SGD prefer basins that are flatter?" It's because they look at the derivative. When the bottom of the valley is flat they don't have enough momentum to get out.
I have observed the lottery ticket hypothesis.
(Replying to PARENT post)
This means that, at the very least, there are many global optima (well, unless all permutable weights end up with the same value, which is obviously not the case). The fact that different initializations/early training steps can end up in different but equivalent optima follows directly from this symmetry. But whether all their basins are connected, or whether there are just multiple equivalent basins, is much less clear. The "non-linear" connection stuff does seem to imply that they are all in some (high-dimensional, non-linear) valley.
To be clear, this is just me looking at these results from the "permutation" perspective above, because it leads to a few obvious conclusions. But I am not qualified to judge which of these results are more or less profound.
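For what it's worth, here is a minimal numpy sketch of that permutation symmetry (my own illustration): permuting the hidden units, together with the matching rows and columns of the surrounding weight matrices, leaves the network's function, and hence the loss, exactly unchanged, so every minimum comes with a whole family of permuted copies.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)   # layer 1: 4 -> 16
    W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)    # layer 2: 16 -> 3

    def forward(x, W1, b1, W2, b2):
        h = np.maximum(0.0, W1 @ x + b1)       # ReLU hidden layer
        return W2 @ h + b2

    perm = rng.permutation(16)                 # reorder the 16 hidden units
    W1p, b1p = W1[perm], b1[perm]              # permute rows of W1 and b1
    W2p = W2[:, perm]                          # permute columns of W2 to match

    x = rng.normal(size=4)
    print(np.allclose(forward(x, W1, b1, W2, b2),
                      forward(x, W1p, b1p, W2p, b2)))   # True: identical outputs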