### Top 10 Arxiv Papers Today

##### #1. Gradient Descent Finds Global Minima of Deep Neural Networks
###### Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Xiyu Zhai
Gradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex. This paper proves that gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). Our analysis relies on the particular structure of the Gram matrix induced by the neural network architecture. This structure allows us to show that the Gram matrix is stable throughout the training process, and this stability implies the global optimality of the gradient descent algorithm. Our bounds also shed light on the advantage of using ResNet over the fully connected feedforward architecture; our bound requires the number of neurons per layer to scale exponentially with depth for feedforward networks, whereas for ResNet it only requires the number of neurons per layer to scale polynomially with depth. We further extend our analysis to deep residual convolutional neural networks and obtain a similar convergence result.
more | pdf | html
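The over-parameterization effect the abstract describes can be seen in a toy experiment. The sketch below (assumptions: a two-layer ReLU net in the wide/NTK-style regime with synthetic data and hand-picked sizes and learning rate, not the paper's deep ResNet setting) trains only the first layer with plain gradient descent and watches the squared loss collapse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: n samples, width m >> n (over-parameterized).
n, d, m = 10, 20, 2000
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs
y = rng.normal(size=n)

W = rng.normal(size=(m, d))                     # first-layer weights (trained)
a = rng.choice([-1.0, 1.0], size=m)             # fixed random output weights

def loss(W):
    u = np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)
    return 0.5 * np.sum((u - y) ** 2)

lr = 0.1
init = loss(W)
for _ in range(5000):
    pre = X @ W.T                               # (n, m) pre-activations
    u = np.maximum(pre, 0.0) @ a / np.sqrt(m)   # predictions
    r = u - y                                   # residuals
    gate = (pre > 0).astype(float)              # ReLU derivative
    # Gradient of 0.5 * ||u - y||^2 w.r.t. W.
    grad = (a[:, None] * ((gate * r[:, None]).T @ X)) / np.sqrt(m)
    W -= lr * grad
final = loss(W)
print(init, final)
```

With enough width, the (empirical) Gram matrix stays close to its initialization, so the dynamics behave almost linearly and the loss shrinks geometrically, which is the mechanism the paper makes rigorous.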
###### Tweets
newsyc20: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/Fmcc3SDq7G (https://t.co/IGaOH0eR1D)
HNTweets: Gradient Descent Finds Global Minima of Deep Neural Networks: https://t.co/Lau0L7dys7 Comments: https://t.co/gJcjJldJ3i
deeplearning4j: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/ABH4vz2GnZ #deeplearning #machinelearning
newsyc100: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/qi3pM5oab3 (https://t.co/ooQxRaAHap)
newsyc50: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/wk9NQBSFm2 (https://t.co/l2OxNauBeG)
roydanroy: Any optimization experts out there willing to weigh in? https://t.co/tG1JsqGLOg https://t.co/Kx4lQ2bIaL
WAWilsonIV: Some suitable generalization of this paper could shed light on a question I sometimes think about: "why do neural networks even work at all?" https://t.co/owrB5Fnv0h
BrundageBot: Gradient Descent Finds Global Minima of Deep Neural Networks. Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai https://t.co/YcCc6xgN19
LukeSpear: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/EYvZQqcnFk #hn #startups #privacy
Synced_Global: "Gradient Descent Finds Global Minima of Deep Neural Networks" by researchers from @CarnegieMellon, @PKU1898, @USC and @MITEECS Read the full paper at https://t.co/huMYfqr1ys https://t.co/2jJzQS6qxi
loretoparisi: #GradientDescend finds a global minimum in training deep neural networks despite the objective function being non-convex proving gradient achieves zero training loss in polynomial time #DeepLearning https://t.co/soJnJwi953
harisamin: WTF is this even possible https://t.co/CdahPoOu28 ? I haven't read the paper yet, wondering what your thoughts are @spsaaibi
KloudStrife: 'Gradient Descent finds global minima of deep NNs'. Applies to resnets, in the convolutional case, no unrealistic assumptions. Important paper if confirmed. https://t.co/6soCfAPnJU
hn_frontpage: Gradient Descent Finds Global Minima of Deep Neural Networks L: https://t.co/dfFLtmVs2B C: https://t.co/RVx6tBYXCN
abhatt2: Du et al. (https://t.co/Uuygm49DYr) and Allen-Zhu et al. (https://t.co/EAnU5M7Bqg) independently seem to have solved a basic theory problem in modern ML: efficient convergence of over-parameterized deep neural nets. Very clean theorems, not many assumptions. Pretty impressive!
tisamit: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/OAsXOKgFEj #datascience #analytics
InnoveoPartners: Interesting by AI ? You have to read this "Gradient Descent Finds Global Minima of Deep Neural Networks" https://t.co/5oCVb1WCan https://t.co/lCWCQcYW69
_bha1: https://t.co/s8Ojo0Awc9 Gradient Descent Finds Global Minima of Deep Neural Networks. https://t.co/13ByZqrRoX
gmwagner: I've found ResNets and ConNN have similar performance on the same data, after reading this paper I might need to investigate that more. https://t.co/B5zuGAQzqj
arxivml: "Gradient Descent Finds Global Minima of Deep Neural Networks", Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang,… https://t.co/231Dzu6qd6
reddit_ml: [1811.03804] Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/NlFOuwmWJj
nmfeeds: [AI] https://t.co/7qt13PxoCQ Gradient Descent Finds Global Minima of Deep Neural Networks. Gradient descent finds a global...
hereticreader: [1811.03804] Gradient Descent Finds Global Minima of Deep Neural Networks - https://t.co/9D1COPFWPY https://t.co/XPxp4cTzkN
DataSciFact: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/9VbSMulFLw
SciFi: Gradient Descent Finds Global Minima of Deep Neural Networks. https://t.co/MjmXyUF2cq
angsuman: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/3OZMkmEbkC
betterhn50: 55 – Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/jX9DHfr6Jm
DavidpichKh: "Gradient Descent Finds Global Minima of Deep Neural Networks" Du et al.: https://t.co/smfmtzMrrT #DeepLearning https://t.co/6qO2A0dPqr
arxiv_cscv: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/D6u9mSXJwL
doctorSturza: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/VFizAyqXV8
hackernewsfeed: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/kmqjVk6zRg
yuxili99: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/VlPF18nUjK
hackernews100: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/EqvbM5Dkjp
hackernewsrobot: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/N8adVA0lUk
betterhn100: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/5zgLBUv7DO
RT @roydanroy: Any optimization experts out there willing to weigh in? https://t.co/tG1JsqGLOg https://t.co/Kx4lQ2bIaL (retweeted by 100+ accounts, including hardmaru, rasbt, hugo_larochelle, KyleCranmer, and xiangrenUSC)
rabkhan25: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/znqOuUM4kp
RT @deeplearning4j: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/ABH4vz2GnZ #deeplearning #machinelearning (retweeted by PaaSDev, Klevis_Ramo, mattpetersen_ai, MatteoMasperoNL, necoleman, and abhijithtn)
RT @DataSciFact: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/9VbSMulFLw (retweeted by __DaLong, emilioleton, datanerdword, JavierBurroni, one_twit_wonder, 1sdom, GaryTheGammarid, and coreyamyers)
karthiknrao: RT @reddit_ml: [1811.03804] Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/NlFOuwmWJj
_reachsumit: RT @hn_frontpage: Gradient Descent Finds Global Minima of Deep Neural Networks L: https://t.co/dfFLtmVs2B C: https://t.co/RVx6tBYXCN
kli_nlpr: RT @yuxili99: Gradient Descent Finds Global Minima of Deep Neural Networks https://t.co/VlPF18nUjK
###### Other stats
Sample Sizes: None.
Authors: 5
Total Words: 14525
Unique Words: 2449

##### #2. A General Method for Amortizing Variational Filtering
###### Joseph Marino, Milan Cvitkovic, Yisong Yue
We introduce the variational filtering EM algorithm, a simple, general-purpose method for performing variational inference in dynamical latent variable models using information from only past and present variables, i.e. filtering. The algorithm is derived from the variational objective in the filtering setting and consists of an optimization procedure at each time step. By performing each inference optimization procedure with an iterative amortized inference model, we obtain a computationally efficient implementation of the algorithm, which we call amortized variational filtering. We present experiments demonstrating that this general-purpose method improves performance across several deep dynamical latent variable models.
more | pdf | html
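The per-time-step inference optimization the abstract describes can be sketched on a toy model. Below is a minimal illustration on a 1-D linear-Gaussian state-space model (an assumption for the demo; the paper targets deep dynamical latent variable models, and the inner update would be a learned iterative amortized inference model rather than the plain gradient step used here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D linear-Gaussian state-space model.
a, q_var, r_var = 0.9, 0.1, 0.5       # transition coeff., process & obs. noise
T = 50
z = np.zeros(T)
for t in range(1, T):
    z[t] = a * z[t - 1] + rng.normal(scale=np.sqrt(q_var))
x = z + rng.normal(scale=np.sqrt(r_var), size=T)   # noisy observations

mus = np.zeros(T)                     # filtered variational means
lr, n_inner = 0.05, 20
for t in range(1, T):                 # filtering: only past and present data
    mu = a * mus[t - 1]               # initialize from the prior prediction
    for _ in range(n_inner):          # per-step inference optimization
        # Gradient of the per-step free energy (negative ELBO) w.r.t. mu:
        # observation (reconstruction) term + transition (prior) term.
        grad = (mu - x[t]) / r_var + (mu - a * mus[t - 1]) / q_var
        mu -= lr * grad
    mus[t] = mu

mse_filtered = np.mean((mus - z) ** 2)
mse_observed = np.mean((x - z) ** 2)
print(mse_filtered, mse_observed)
```

Amortization enters when the inner gradient loop is replaced by a learned model that maps the current estimate and free-energy gradient to an updated estimate, making each step's inference cheap.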
###### Tweets
BrundageBot: A General Method for Amortizing Variational Filtering. Joseph Marino, Milan Cvitkovic, and Yisong Yue https://t.co/ZSknyhPjJW
Memoirs: A General Method for Amortizing Variational Filtering. https://t.co/V8T5zidWDf
###### Github

PyTorch implementation of AVF

Repository: amortized-variational-filtering
User: joelouismarino
Language: Python
Stargazers: 8
Subscribers: 4
Forks: 1
Open Issues: 0
###### Other stats
Sample Sizes: None.
Authors: 3
Total Words: 6968
Unique Words: 2126

##### #3. Quantum-inspired low-rank stochastic regression with logarithmic dependence on the dimension
###### András Gilyén, Seth Lloyd, Ewin Tang
We construct an efficient classical analogue of the quantum matrix inversion algorithm (HHL) for low-rank matrices. Inspired by recent work of Tang, and assuming length-square sampling access to the input data, we implement the pseudoinverse of a low-rank matrix and sample from the solution to the problem $Ax=b$ using fast sampling techniques. We implement the pseudoinverse by finding an approximate singular value decomposition of $A$ via subsampling and then inverting the singular values. In principle, the approach can also be used to apply any desired "smooth" function to the singular values. Since many quantum algorithms can be expressed as a singular value transformation problem, our result suggests that more low-rank quantum algorithms can be effectively "dequantised" into classical length-square sampling algorithms.
more | pdf | html
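The core step ("approximate SVD via subsampling, then invert the singular values") can be illustrated densely in numpy. This is only a sketch under stated assumptions: the matrix sizes, rank, and sample count are invented for the demo, and the real algorithm never reads $A$ in full; it uses length-square sampling data structures to achieve polylogarithmic runtime.

```python
import numpy as np

rng = np.random.default_rng(2)

# Build an exactly rank-k matrix and a consistent right-hand side.
n, d, k = 200, 50, 3
U0, _ = np.linalg.qr(rng.normal(size=(n, k)))
V0, _ = np.linalg.qr(rng.normal(size=(d, k)))
A = U0 @ np.diag([10.0, 9.0, 8.0]) @ V0.T
b = A @ rng.normal(size=d)                    # b lies in the range of A

# Length-square sampling: pick rows with probability proportional to
# ||A_i||^2, rescaled so that E[S^T S] = A^T A.
p = np.linalg.norm(A, axis=1) ** 2
p /= p.sum()
s = 1000
idx = rng.choice(n, size=s, p=p)
S = A[idx] / np.sqrt(s * p[idx, None])

# Approximate right singular vectors/values from the sampled sketch, then
# apply the pseudoinverse as A^+ b = V diag(sigma^-2) V^T A^T b.
_, sig, Vt = np.linalg.svd(S, full_matrices=False)
V, sig = Vt[:k].T, sig[:k]
x_hat = V @ ((V.T @ (A.T @ b)) / sig**2)

rel_residual = np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b)
print(rel_residual)
```

Applying a different function than $\sigma \mapsto \sigma^{-1}$ to `sig` gives the more general "smooth singular value transformation" the abstract mentions.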
###### Tweets
lukOlejnik: Interesting result. A specific algorithm for a quantum computer ("low-rank stochastic regression") found to be efficiently solved by non-quantum computers. Not yet understood well which problems are best solved with quantum computers. https://t.co/ugGX81ga6m https://t.co/pr8CAPK6uy
QuantumMemeing: Fig. 1.26: A meme [1]. [1] Quantum Computing Memes for QMA-Complete Teens, Studies in Ancient Greek and Syriac Memeing (2018). Ewin Tang destroying the hopes and dreams of QMLers everywhere. Check out the paper here: https://t.co/jNo9XaPsRI https://t.co/oQwzeVkLyS
yuyu_hf: Another new one from the quantum-information slayer… https://t.co/Ebas1imqAV
fgksk: Whoa. Part three already. https://t.co/oMrsU9OB02
ComputerPapers: Quantum-inspired low-rank stochastic regression with logarithmic dependence on the dimension. https://t.co/jDZCT9f2us
ewintang: New twin papers on quantum-inspired algorithms for low-rank matrix inversion: https://t.co/3T6Xx9G3hy and https://t.co/tUPCxbq0UA
RT @QuantumHazzard: Wow -- the famous HHL algorithm doesn't actually need a quantum computer! https://t.co/5ZduTl0vcZ Another quantum algor… (retweeted by 25+ accounts, including octonion, jpdowling, RichFelker, ilyaraz2, and gshartnett)
RT @fgksk: Whoa. Part three already. https://t.co/oMrsU9OB02 (retweeted by makoto0218ne56, gejikeiji, iKodack, kamakiri_ys, cocori_aqua, yoshi_and_aki, quantumbtc, ywyamashiro, world_fantasia, akinori_kawachi, and ons_yy)
cocori_aqua: RT @yuyu_hf: Another new one from the quantum-information slayer… https://t.co/Ebas1imqAV
###### Other stats
Sample Sizes: None.
Authors: 3
Total Words: 5438
Unique Words: 1428

##### #4. The hidden giant: discovery of an enormous Galactic dwarf satellite in Gaia DR2
###### G. Torrealba, V. Belokurov, S. E. Koposov, T. S. Li, M. G. Walker, J. L. Sanders, A. Geringer-Sameth, D. B. Zucker, K. Kuehn, N. W. Evans, W. Dehnen
We report the discovery of a Milky Way satellite in the constellation of Antlia. The Antlia 2 dwarf galaxy is located behind the Galactic disc at a latitude of $b\sim 11^{\circ}$ and spans 1.26 degrees, which corresponds to $\sim2.9$ kpc at its distance of 130 kpc. While similar in extent to the Large Magellanic Cloud, Antlia 2 is orders of magnitude fainter at $M_V=-8.5$ mag, making it by far the lowest surface brightness system known (at $32.3$ mag/arcsec$^2$), $\sim100$ times more diffuse than the so-called ultra diffuse galaxies. The satellite was identified using a combination of astrometry, photometry and variability data from Gaia Data Release 2, and its nature was confirmed with deep archival DECam imaging, which revealed a conspicuous BHB signal in agreement with the distance obtained from Gaia RR Lyrae. We have also obtained follow-up spectroscopy using AAOmega on the AAT to measure the dwarf's systemic velocity, $290.9\pm0.5$ km/s, its velocity dispersion, $5.7\pm1.1$ km/s, and mean metallicity, [Fe/H]$=-1.4$. From these...
more | pdf | html
###### Tweets
cosmos4u: Astronomy really needed something like that ... an "enormous dwarf galaxy" - Antlia 2 is similar in extent to the Large Magellanic Cloud but orders of magnitude fainter: https://t.co/YfL73W8UZc
mmoyr: Just discovered: An enormous galaxy (about twice as big as the moon appears) orbiting the Milky Way. But it's incredibly faint. https://t.co/ftcdw0GJVe
kevaba: Astronomers discover an enormous but dim dwarf galaxy orbiting our Milky Way Galaxy, using Gaia data: "The origin of this core may be consistent with aggressive feedback, or may even require alternatives to cold dark matter" https://t.co/J0LuPIKVWd
MBKplus: First there was the "feeble giant" (https://t.co/W4q9O7rTII). Now, the "hidden giant" (https://t.co/tUOmCNzQrV). Amazing discoveries from Torrealba et al. of Draco-mass galaxies with sizes that are a factor of 5-10 larger, hidden in the Milky Way! https://t.co/hcWhM2DgTw
8minutesold: A new satellite galaxy of the Milky Way has been discovered using Gaia DR2 data: Antlia 2. It is pretty weird: the lowest-surface brightness satellite known, and apparently living in one of the lowest-density DM halos, too. https://t.co/ZpiL2jOfDl What about VPOS? MOND? Thread👇 https://t.co/GcQMDUGurg
Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report the discovery [using a combination of astrometry, photometry and variability] of a Milky-Way satellite in the constellation of Antlia [at a distance of 130 kpc]" https://t.co/O2YYBJGJNa
conselice: Interesting result - Gaia has found a dwarf galaxy which is 1.26 deg on the sky - over twice as big as the moon appears. However you'd never see it by eye, or even with deep imaging, given that this has a surface brightness of ~32. https://t.co/mMjx3MaVwi
AstroRoque: A new satellite of the Milky Way discovered with @ESAGaia, Antlia 2 dwarf galaxy, sets a new limit for the dimmest and most diffuse system known. https://t.co/T2mLbBrNQh Inferred orbit of Antlia 2, (Figure 8) https://t.co/gbU9W2ruvv
neuronomer: Brilliant paper by Torrealba et al. on arXiv today. Discovery of another satellite of the Milky Way! Is it controversial to think that the most convincing of these three plots is the proper-motion one? https://t.co/dURVpD3bIG https://t.co/NjcIe258aU
AsteroidEnergy: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
physicsmatt: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
CelineBoehm1: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
kevaba: RT @MBKplus: First there was the "feeble giant" (https://t.co/W4q9O7rTII). Now, the "hidden giant" (https://t.co/tUOmCNzQrV). Amazing disco…
fergleiser: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
nfmartin1980: RT @MBKplus: First there was the "feeble giant" (https://t.co/W4q9O7rTII). Now, the "hidden giant" (https://t.co/tUOmCNzQrV). Amazing disco…
Stoner_68: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
HansPrein: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
GaiaUB: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
marianojavierd1: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
galaxy_map: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
astroarianna: RT @MBKplus: First there was the "feeble giant" (https://t.co/W4q9O7rTII). Now, the "hidden giant" (https://t.co/tUOmCNzQrV). Amazing disco…
xurde69: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
sergiosanz001: RT @MBKplus: First there was the "feeble giant" (https://t.co/W4q9O7rTII). Now, the "hidden giant" (https://t.co/tUOmCNzQrV). Amazing disco…
yshalf: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
FernRoyal: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
CosmoCa3sar: RT @MBKplus: First there was the "feeble giant" (https://t.co/W4q9O7rTII). Now, the "hidden giant" (https://t.co/tUOmCNzQrV). Amazing disco…
DivakaraMayya: RT @AstroPHYPapers: The hidden giant: discovery of an enormous Galactic dwarf satellite in Gaia DR2. https://t.co/TW2SBSbhUn
real_vrocha: RT @Jos_de_Bruijne: "The hidden giant: discovery of an enormous Galactic dwarf satellite in #GaiaDR2" https://t.co/OPKhM2PToj "We report th…
###### Other stats
Sample Sizes : None.
Authors: 11
Total Words: 18930
Unique Words: 5712

##### #5. Embracing the Laws of Physics: Three Reversible Models of Computation
###### Jacques Carette, Roshan P. James, Amr Sabry
Our main models of computation (the Turing Machine and the RAM) make fundamental assumptions about which primitive operations are realizable. The consensus is that these include logical operations like conjunction, disjunction and negation, as well as reading and writing to memory locations. This perspective conforms to a macro-level view of physics and indeed these operations are realizable using macro-level devices involving thousands of electrons. This point of view is however incompatible with quantum mechanics, or even elementary thermodynamics, as both imply that information is a conserved quantity of physical processes, and hence of primitive computational operations. Our aim is to re-develop foundational computational models that embrace the principle of conservation of information. We first define what conservation of information means in a computational setting. We emphasize that computations must be reversible transformations on data. One can think of data as modeled using topological spaces and programs as modeled...
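The contrast between information-destroying and information-conserving primitives can be made concrete with a tiny example (a sketch only; the paper's actual models are built from topological spaces and type theory, not bit operations). AND maps two distinct inputs to the same output, so the input cannot be recovered; a reversible gate like controlled-NOT is a bijection and is its own inverse:

```python
def cnot(a: int, b: int) -> tuple[int, int]:
    """Controlled-NOT: flips b iff a is 1. A bijection on bit pairs,
    so no information is destroyed; the gate is its own inverse."""
    return a, b ^ a

# Irreversible AND: (1, 0) and (0, 0) both map to 0 -- the input is lost.
# Reversible CNOT: applying it twice recovers every input.
for a in (0, 1):
    for b in (0, 1):
        assert cnot(*cnot(a, b)) == (a, b)
print("cnot is self-inverse on all bit pairs")
```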
more | pdf | html
###### Tweets
HNTweets: Embracing the Laws of Physics: Three Reversible Models of Computation: https://t.co/92Law8R96K Comments: https://t.co/aYLFpwEAS1
arxiv_org: Embracing the Laws of Physics: Three Reversible Models of Computation. https://t.co/VzRixLj4Bq https://t.co/RQqHnjM624
Aldana_Angel: https://t.co/hPIJbhsNIb Embracing the Laws of Physics: Three Reversible Models of Computation Jacques Carette, Roshan P. James, Amr Sabry
sigfpe: One for the to-read list: Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/KgRMHc4fri
StephenPiment: Embracing the Laws of Physics: Three Reversible Models of Computation (using Curry-Howard) https://t.co/Vh8Tz6W424
jmsunico: sn-news: #sw #dev #maths Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/ElxzDRwG2C
hn_frontpage: Embracing the Laws of Physics: Three Reversible Models of Computation L: https://t.co/OJVUK7CPhd C: https://t.co/z6Hu4842H4
kov4l3nko: Hmmm... just in time:)[1811.03678] Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/OM1bq2b06X
andrewfnewman: "Programs as Reversible Deformations" https://t.co/AHTE3QGsBE something to get into if you're not dealing with JavaScript.
kushnerbomb: gud paper, much more type theory than I expected https://t.co/VxCs4zEmY0
angsuman: Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/mMLEyIFsUZ
betterhn50: 51 – Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/JsPmBCjHtD
QuantumPapers: Embracing the Laws of Physics: Three Reversible Models of Computation. https://t.co/NNq442eSaD
joshtronic: Embracing the Laws of Physics: Three Reversible Models of Computation - https://t.co/IF5rLlIQdu
dJdU: “Embracing the Laws of Physics: Three Reversible Models of Computation” https://t.co/AkPI6aRpga
hackernewsrobot: Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/Atno35U9Mi
jackhidary: RT @StephenPiment: Embracing the Laws of Physics: Three Reversible Models of Computation (using Curry-Howard) https://t.co/Vh8Tz6W424
Juan_A_Lleo: RT @StephenPiment: Embracing the Laws of Physics: Three Reversible Models of Computation (using Curry-Howard) https://t.co/Vh8Tz6W424
paul_snively: RT @sigfpe: One for the to-read list: Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/KgRMHc4fri
PLT_cheater: RT @sigfpe: One for the to-read list: Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/KgRMHc4fri
Priceeqn: RT @StephenPiment: Embracing the Laws of Physics: Three Reversible Models of Computation (using Curry-Howard) https://t.co/Vh8Tz6W424
jjcarett2: RT @arxiv_cslo: Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/nGVvjPyTqm
maxsnew: RT @arxiv_cslo: Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/nGVvjPyTqm
pauldhoward: RT @StephenPiment: Embracing the Laws of Physics: Three Reversible Models of Computation (using Curry-Howard) https://t.co/Vh8Tz6W424
SandMouth: RT @arxiv_cslo: Embracing the Laws of Physics: Three Reversible Models of Computation https://t.co/nGVvjPyTqm
shubh_300595: RT @arxiv_org: Embracing the Laws of Physics: Three Reversible Models of Computation. https://t.co/VzRixLj4Bq https://t.co/RQqHnjM624
###### Other stats
Sample Sizes : None.
Authors: 3
Total Words: 16963
Unique Words: 4120

##### #6. Intrinsic Differentiability and Intrinsic Regular Surfaces in Carnot Groups
###### Daniela Di Donato
A Carnot group G is a connected, simply connected, nilpotent Lie group with stratified Lie algebra. Intrinsic regular surfaces in Carnot groups play the same role as C^1 surfaces in Euclidean spaces. As in Euclidean spaces, intrinsic regular surfaces can be locally defined in different ways: e.g. as non-critical level sets or as continuously intrinsically differentiable graphs. The equivalence of these natural definitions is the problem that we are studying. Precisely, our aim is to generalize some results proved by Ambrosio, Serra Cassano, Vittone valid in Heisenberg groups to the more general setting of Carnot groups.
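The stratification condition in the first sentence can be written out explicitly (the standard definition): the Lie algebra decomposes into layers generated by brackets of the first layer,

```latex
\mathfrak{g} = V_1 \oplus V_2 \oplus \cdots \oplus V_s, \qquad
[V_1, V_i] = V_{i+1} \ \ \text{for } 1 \le i < s, \qquad
[V_1, V_s] = \{0\}.
```

The Heisenberg groups are the simplest non-abelian case, with step $s = 2$.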
more | pdf | html
None.
###### Tweets
mathDGb: Daniela Di Donato : Intrinsic Differentiability and Intrinsic Regular Surfaces in Carnot Groups https://t.co/uz8KyNr8TX https://t.co/HRjKZaElAP
mathMGb: Daniela Di Donato : Intrinsic Differentiability and Intrinsic Regular Surfaces in Carnot Groups https://t.co/BSfntNNIOo https://t.co/jXozTqXzqG
###### Other stats
Sample Sizes : None.
Authors: 1
Total Words: 17734
Unique Words: 2500

##### #7. Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction
###### Yannick Suter, Alain Jungo, Michael Rebsamen, Urspeter Knecht, Evelyn Herrmann, Roland Wiest, Mauricio Reyes
Deep learning for regression tasks on medical imaging data has shown promising results. However, compared to other approaches, their power is strongly linked to the dataset size. In this study, we evaluate 3D-convolutional neural networks (CNNs) and classical regression methods with hand-crafted features for survival time regression of patients with high grade brain tumors. The tested CNNs for regression showed promising but unstable results. The best performing deep learning approach reached an accuracy of 51.5% on held-out samples of the training set. All tested deep learning experiments were outperformed by a Support Vector Classifier (SVC) using 30 radiomic features. The investigated features included intensity, shape, location and deep features. The submitted method to the BraTS 2018 survival prediction challenge is an ensemble of SVCs, which reached a cross-validated accuracy of 72.2% on the BraTS 2018 training set, 57.1% on the validation set, and 42.9% on the testing set. The results suggest that more training data...
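The winning classical baseline is a standard pipeline; a minimal sketch of an SVC over radiomic-style features follows (stand-in random data and generic settings, not the authors' pipeline; BraTS survival prediction uses three survival classes):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in data: 100 patients x 30 features (the paper's features include
# intensity, shape, location and deep features); 3 survival classes.
X = rng.normal(size=(100, 30))
y = rng.integers(0, 3, size=100)

# Scaling + RBF-kernel SVC, evaluated with cross-validation as in the paper.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.3f}")
```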
more | pdf | html
###### Tweets
BrundageBot: Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction. Yannick Suter, Alain Jungo, Michael Rebsamen, Urspeter Knecht, Evelyn Herrmann, Roland Wiest, and Mauricio Reyes https://t.co/VEQmuPViqQ
arxivml: "Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction", Yannick Suter, Alain Jungo… https://t.co/wzHvywfqp5
nmfeeds: [CV] https://t.co/VNeKC4aP2L Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction. Deep l...
nmfeeds: [O] https://t.co/VNeKC4aP2L Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction. Deep le...
Memoirs: Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction. https://t.co/MwW4XvorCy
arxiv_cscv: Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction https://t.co/vUjG6qv8RM
arxiv_cscv: Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction https://t.co/vUjG6qv8RM
###### Other stats
Sample Sizes : [1353, 163]
Authors: 7
Total Words: 5149
Unique Words: 1867

##### #8. The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the Milky Way halo
###### Ana Bonaca, David W. Hogg, Adrian M. Price-Whelan, Charlie Conroy
We present a model for the interaction of the GD-1 stellar stream with a massive perturber that naturally explains many of the observed stream features, including a gap and an off-stream spur of stars. The model involves an impulse by a fast encounter, after which the stream grows a loop of stars at different orbital energies. At specific viewing angles, this loop appears offset from the stream track. The configuration-space observations are sensitive to the mass, age, impact parameter, and total velocity of the encounter, and future velocity observations will constrain the full velocity vector of the perturber. A quantitative comparison of the spur and gap features prefers models where the perturber is in the mass range of $10^6\,\rm M_\odot$ to $10^8\,\rm M_\odot$. Orbit integrations back in time show that the stream encounter could not have been caused by any known globular cluster or dwarf galaxy, and mass, size and impact-parameter arguments show that it could not have been caused by a molecular cloud in the Milky Way disk....
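The size of the velocity kick from such a fast flyby can be estimated with the textbook impulse approximation, $\Delta v \approx 2GM/(b\,w)$ (a back-of-the-envelope check, not the paper's full model; the perturber mass is taken mid-range of the quoted $10^6$-$10^8\,\rm M_\odot$, and the impact parameter and relative speed are illustrative assumptions):

```python
# Impulse-approximation velocity kick from a fast point-mass flyby:
#   delta_v ~ 2 G M / (b * w)
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun
M = 1e7       # perturber mass in Msun (mid-range of the paper's 1e6-1e8)
b = 0.1       # impact parameter in kpc (assumed for illustration)
w = 200.0     # relative encounter speed in km/s (assumed for illustration)

delta_v = 2 * G * M / (b * w)
print(f"{delta_v:.1f} km/s")  # a few km/s, comparable to stream velocity dispersions
```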
more | pdf | html
###### Tweets
adamspacemann: This is exciting: astronomers think that a big glob of dark matter could be what disrupted stellar stream GD-1 around the Milky Way https://t.co/Mpdsqy755u
adrianprw: On the #arxiv today: evidence for a dark substructure in the Milky Way halo from the morphology of the GD-1 stream! https://t.co/hyTk9gKW8Y (led by @anabonaca w/ @davidwhogg, Charlie Conroy) and see https://t.co/lixElXikTf -- featuring @ESAGaia DR2 data!
Jos_de_Bruijne: "The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the #MilkyWay halo" https://t.co/w7AyuVzfYZ"We present a model for the interaction of the GD-1 stream with a perturber that naturally explains many of the observed stream features" #GaiaMission #GaiaDR2 https://t.co/FYgoSFf9iR
SaschaCaron: Hint for Dark Matter substructure from stellar streams ?https://t.co/ukobnoMVmQ
anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and privilege to work with @davidwhogg, @adrianprw, Charlie Conroy, and the @ESAGaia data.
AstroPHYPapers: The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the Milky Way halo. https://t.co/D7D8uhctXU
scimichael: The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the Milky Way halo https://t.co/Wa5dngzFQD
vancalmthout: RT @SaschaCaron: Hint for Dark Matter substructure from stellar streams ?https://t.co/ukobnoMVmQ
adrianprw: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri…
ReadDark: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri…
johngizis: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri…
nbody6: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri…
Jos_de_Bruijne: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri…
Motigomeman: RT @AstroPHYPapers: The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the Milky Way halo. https://t.co/D7D8uhctXU
deniserkal: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri…
Katelinsaurus: RT @AstroPHYPapers: The Spur and the Gap in GD-1: Dynamical evidence for a dark substructure in the Milky Way halo. https://t.co/D7D8uhctXU
isalsalism: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri…
garavito_nico: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri…
JeffCarlinastro: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri…
gorankab: RT @anabonaca: The GD-1 stellar stream might have been perturbed by a dark, massive halo object: https://t.co/YKShO40dLv A pleasure and pri…
###### Other stats
Sample Sizes : None.
Authors: 4
Total Words: 10347
Unique Words: 2747

##### #9. TED: Teaching AI to Explain its Decisions
###### Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilovic
Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions. However, as many of these systems are opaque in their operation, there is a growing demand for such systems to provide explanations for their decisions. Conventional approaches to this problem attempt to expose or discover the inner workings of a machine learning model with the hope that the resulting explanations will be meaningful to the consumer. In contrast, this paper suggests a new approach to this problem. It introduces a simple, practical framework, called Teaching Explanations for Decisions (TED), that provides meaningful explanations that match the mental model of the consumer. We illustrate the generality and effectiveness of this approach with two different examples, obtaining highly accurate explanations with no loss of prediction accuracy.
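The core TED move is to train on triples of features, decision, and explanation rather than just (features, decision). In miniature (a sketch of the idea with a generic classifier and invented toy data, not the paper's experiments): combine each label with its explanation into one joint label, train any classifier on the joint label, and decode both parts at prediction time.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy data: features are (age, is_active); decisions come with
# the explanation a domain expert would give for them.
X = [[30, 1], [45, 0], [25, 1], [60, 0]]
y = [1, 0, 1, 0]                                   # decisions
e = ["young_active", "older_inactive",             # explanations
     "young_active", "older_inactive"]

# Combine decision and explanation into a single joint label and train on it.
combined = [f"{yi}|{ei}" for yi, ei in zip(y, e)]
clf = DecisionTreeClassifier(random_state=0).fit(X, combined)

# At prediction time, decode the joint label back into both parts.
decision, explanation = clf.predict([[28, 1]])[0].split("|", 1)
print(decision, explanation)
```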
more | pdf | html
###### Tweets
BrundageBot: TED: Teaching AI to Explain its Decisions. Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, and Aleksandra Mojsilovic https://t.co/Dju2Fzo7ZW
arxivml: "TED: Teaching AI to Explain its Decisions", Noel C． F． Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murr… https://t.co/kmuAkPoShn
nmfeeds: [AI] https://t.co/hGH2tkmKCp TED: Teaching AI to Explain its Decisions. Artificial intelligence systems are being increasi...
nmfeeds: [O] https://t.co/hGH2tkmKCp TED: Teaching AI to Explain its Decisions. Artificial intelligence systems are being increasin...
###### Other stats
Sample Sizes : None.
Authors: 8
Total Words: 6175
Unique Words: 2074

##### #10. The Augmented Synthetic Control Method
###### Eli Ben-Michael, Avi Feller, Jesse Rothstein
The synthetic control method (SCM) is a popular approach for estimating the impact of a treatment on a single unit in panel data settings. The "synthetic control" is a weighted average of control units that balances the treated unit's pre-treatment outcomes as closely as possible. The curse of dimensionality, however, means that SCM does not generally achieve exact balance, which can bias the SCM estimate. We propose an extension, Augmented SCM, which uses an outcome model to estimate the bias due to covariate imbalance and then de-biases the original SCM estimate, analogous to bias correction for inexact matching. We motivate this approach by showing that SCM is a (regularized) inverse propensity score weighting estimator, with pre-treatment outcomes as covariates and a ridge penalty on the propensity score coefficients. We give theoretical guarantees for specific cases and propose a new inference procedure. We demonstrate gains from Augmented SCM with extensive simulation studies and apply this framework to canonical...
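The two-step structure of the estimator can be sketched in a few lines (hypothetical data; the real SCM step constrains the weights to the simplex, for which the authors provide the `augsynth` package linked below, whereas here a ridge-regularized fit stands in): fit balancing weights on pre-treatment outcomes, then de-bias with an outcome model's estimate of the residual imbalance.

```python
import numpy as np

rng = np.random.default_rng(1)
T0, N = 10, 8                        # pre-treatment periods, control units
Y0 = rng.normal(size=(T0, N))        # controls' pre-treatment outcomes
y1 = Y0 @ rng.dirichlet(np.ones(N))  # treated unit's pre-treatment outcomes
Ypost = rng.normal(size=N)           # controls' post-treatment outcomes

# Step 1 (SCM-like): weights that balance pre-treatment outcomes.
# (Real SCM restricts w to nonnegative weights summing to one.)
lam = 1.0
w = np.linalg.solve(Y0.T @ Y0 + lam * np.eye(N), Y0.T @ y1)
scm_est = w @ Ypost

# Step 2 (augmentation): fit an outcome model on the controls (here a ridge
# regression of post- on pre-treatment outcomes) and correct the SCM estimate
# by the model's estimate of the bias from imperfect balance.
beta = np.linalg.solve(Y0 @ Y0.T + lam * np.eye(T0), Y0 @ Ypost)
aug_est = scm_est + (y1 @ beta - (Y0.T @ beta) @ w)
print(f"SCM: {scm_est:.3f}  Augmented SCM: {aug_est:.3f}")
```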
more | pdf | html
###### Tweets
F_Bethke: New paper explaining the "Augmented Synthetic Control Method" by Ben-Michael et al. Also provides new augsynth #rstats package. https://t.co/2Mok5we0bQ #DataScience #Econometrics
StatsPapers: The Augmented Synthetic Control Method. https://t.co/L1Yigm3QJV
econometriclub: RT @F_Bethke: New paper explaining the "Augmented Synthetic Control Method" by Ben-Michael et al. Also provides new augsynth #rstats package. https://t.co/HrNqJVLnN7 #DataScience #Econometrics
lihua_lei_stat: RT @StatsPapers: The Augmented Synthetic Control Method. https://t.co/L1Yigm3QJV
###### Github

Augmented Synthetic Control Method

Repository: augsynth
User: ebenmichael
Language: R
Stargazers: 2
Subscribers: 0
Forks: 0
Open Issues: 0
###### Other stats
Sample Sizes : None.
Authors: 3
Total Words: 19346
Unique Words: 3589

Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.

Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).

To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else).

To see beautiful figures extracted from papers, follow us on Instagram.

Tracking 56,474 papers.
