Choosing the Sample with Lowest Loss makes SGD Robust
Outliers can significantly skew the parameters of machine learning models trained via stochastic gradient descent (SGD). In this paper we propose a simple variant of vanilla SGD: in each step, first draw a set of k samples, then from these choose the one with the smallest current loss, and perform an SGD-like update with this chosen sample. Vanilla SGD corresponds to k = 1, i.e. no choice; k >= 2 yields a new algorithm, which effectively minimizes a non-convex surrogate loss. Our main contribution is a theoretical analysis of the robustness properties of this idea for ML problems whose objectives are sums of convex losses; the analysis is backed up with linear regression and small-scale neural network experiments.
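The selection rule described in the abstract can be sketched for least-squares regression as follows. This is an illustrative sketch, not the paper's code: the squared loss, the plain-SGD update, and the sampling details are assumptions made here for concreteness.

```python
import numpy as np

def min_k_loss_sgd_step(w, X, y, k, lr, rng):
    """One step of min-k-loss SGD for least-squares regression.

    Draw k candidate samples, keep the one with the smallest current
    loss, and take an ordinary SGD step on that sample. k = 1 recovers
    vanilla SGD; k >= 2 tends to skip high-loss outliers.
    """
    idx = rng.choice(len(y), size=k, replace=False)   # k candidate samples
    residuals = X[idx] @ w - y[idx]
    losses = 0.5 * residuals ** 2                     # current per-sample losses
    i = idx[np.argmin(losses)]                        # sample with lowest loss
    grad = (X[i] @ w - y[i]) * X[i]                   # gradient of 0.5*(x.w - y)^2
    return w - lr * grad
```

On linear data contaminated with large-residual outliers, the outliers carry a much larger current loss than clean points, so for moderate k they are almost never selected and the iterates stay close to the clean least-squares solution.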
Authors


Vatsal Shah
Xiaoxia Wu
Sujay Sanghavi
