Top 10 arXiv Papers Today


2.365 Mikeys
#1. PaperRobot: Incremental Draft Generation of Scientific Ideas
Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knight, Heng Ji, Mohit Bansal, Yi Luan
We present PaperRobot, an automatic research assistant that (1) conducts deep understanding of a large collection of human-written papers in a target domain and constructs comprehensive background knowledge graphs (KGs); (2) creates new ideas by predicting links in the background KGs, combining graph attention and contextual text attention; (3) incrementally writes key elements of a new paper with memory-attention networks: from the input title and predicted related entities it generates a paper abstract, from the abstract it generates the conclusion and future work, and finally from the future work it generates a title for a follow-on paper. Turing tests, in which a biomedical domain expert compares a system output with a human-authored string, show that PaperRobot-generated abstracts, conclusion and future work sections, and new titles are chosen over human-written ones up to 30%, 24% and 12% of the time, respectively.
more | pdf | html
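The incremental drafting loop described in the abstract (title -> related KG entities -> abstract -> conclusion and future work -> follow-on title) can be pictured as a chain of conditioned generation calls. The sketch below is purely illustrative; every name in it (predict_related_entities, gen.generate, top_linked_entities) is a hypothetical stand-in, not the authors' released code.

    # Illustrative sketch of PaperRobot's incremental drafting chain.
    # Every name below is a hypothetical stand-in, not the authors' code.

    def predict_related_entities(title, background_kg):
        # (2) idea creation: link prediction over the background KG,
        # combining graph attention with contextual text attention
        return background_kg.top_linked_entities(title, k=10)

    def paper_robot(title, background_kg, gen):
        # (3) incremental writing with memory-attention networks
        entities = predict_related_entities(title, background_kg)
        abstract = gen.generate(inputs=[title, *entities], target="abstract")
        concl_fw = gen.generate(inputs=[abstract],
                                target="conclusion_and_future_work")
        next_title = gen.generate(inputs=[concl_fw], target="follow_on_title")
        return abstract, concl_fw, next_title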
Figures
Tweets
hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3 🤖✍️
UberCitizen: How ambitious this is, and how fascinating. Just published on https://t.co/i0sQ9kDRqL 👇👇👇 "PaperRobot: Incremental Draft Generation of Scientific Ideas" ─ https://t.co/HDYQC9lXPZ
BrundageBot: PaperRobot: Incremental Draft Generation of Scientific Ideas. Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knight, Heng Ji, Mohit Bansal, and Yi Luan https://t.co/BP4KpcL35I
mandubian: "60% of 6.4 million papers in biomedic & chemistry are about incremental work" Paperbot builds graph knowledge from existing papers, then generate new incremental ideas and their abstract/conclusion and finally title for future follow-on papers... Whoaa :D https://t.co/JjRabY9CwN
keunwoochoi: @bfirsh https://t.co/nybQxVEKUk
jochenleidner: Wang et al.'s PaperRobot: "Turing tests show that PaperRobot-generated output strings are sometimes chosen over human[-]written ones" https://t.co/NDXXRZMfxn #AI #machinelearning #research https://t.co/Pfylhdpc3H
arxivml: "PaperRobot: Incremental Draft Generation of Scientific Ideas", Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knig… https://t.co/4YS1Ib5PXF
Gelarehai: Interesting! "PaperRobot: Incremental Draft Generation of Scientific Ideas" "PaperRobot automatically reads existing papers to build background knowledge graphs, in which nodes are entities/concepts and edges are the relations between these entities" https://t.co/Ps2iT7bUlF https://t.co/lXER2P3zHG
reddit_ml: [R] PaperRobot: Incremental Draft Generation of Scientific Ideas https://t.co/M6ZXlelLrt
bxrobertz: A machine learning paper about building a sophisticated ML model to write machine learning (and other scientific) papers ... https://t.co/gV1q5y6HwR
hereticreader: PaperRobot: Incremental Draft Generation of Scientific Ideas - https://t.co/9D1COPFWPY https://t.co/JktBlbKgD8
LisandroKaunitz: "PaperRobot generated abstracts, conclusion and future work sections, and new titles are chosen over human-written ones up to 30%, 24% and 12% of the time, respectively" https://t.co/qhMpl8ZvWl 🤔
tedherman: next year it will be writing grant proposals https://t.co/leAL5kuluk
malokhem: It's finally here: a deep-learning-based paper-writing bot https://t.co/log7Et1mFA
jeremieclos: "PaperRobot: Incremental Draft Generation of Scientific Ideas". Barely finished my PhD and already on my way to be automated. Life is difficult. https://t.co/Ne7VX1tOPZ
himakotsu: PaperRobot: Incremental Draft Generation of Scientific Ideas. (arXiv:1905.07870v1 [https://t.co/Elc9rIUsHa]) https://t.co/J0MiuuFFYQ
nsatourian: This is really, really interesting. It's exciting to see NLP applied more to accelerating scientific discovery. PaperRobot: Incremental Draft Generation of Scientific Ideas - https://t.co/a8dvJdvhQK
SciFi: PaperRobot: Incremental Draft Generation of Scientific Ideas. https://t.co/locwFgFOzq
jsjeong3: PaperRobot: Incremental Draft Generation of Scientific Ideas https://t.co/NT2vBUNLne
arxiv_cs_LG: PaperRobot: Incremental Draft Generation of Scientific Ideas. Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knight, Heng Ji, Mohit Bansal, and Yi Luan https://t.co/0sZKs3edwk
Alex4386_dev: https://t.co/lldXWCIIeo https://t.co/rB3UMQfiX1 PaperRobot: Incremental Draft Generation of Scientific Ideas 🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔
arxiv_cscl: PaperRobot: Incremental Draft Generation of Scientific Ideas https://t.co/T3NbQUPwfA
arxiv_cscl: PaperRobot: Incremental Draft Generation of Scientific Ideas https://t.co/T3NbQUxUR0
_ivana__anavi_: PaperRobot: Incremental Draft Generation of Scientific Ideas 🤯 https://t.co/90LfXZT5GB
DataSciNews: RT @hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
ceobillionaire: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
ceobillionaire: RT @hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
TweetinChar: RT @hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
TheAnnaGat: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
kennybastani: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
KyleCranmer: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
cvondrick: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
mapc: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
mohitban47: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
sigitpurnomo: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
KevinFaircloth1: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
drfeldt: RT @hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
drfeldt: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
desertnaut: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
sina_lana: RT @hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
windx0303: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
bradleypallen: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
KloudStrife: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
ErmiaBivatan: RT @hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
gardenfelder: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
puneethmishra: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
urban_stevie: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
marwen_o: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
shivamg: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
matthijsMmaas: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
lorenlugosch: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
commoncitizen01: RT @malokhem: It's finally here: a deep-learning-based paper-writing bot https://t.co/log7Et1mFA
TheLeanAcademic: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
drvicentes: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
ezajko: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
csanhuezalobos: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
MasterScrat: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
rohand24: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
_rahulgopinath: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
Rovio_Red: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
MuhammadThalhah: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
westis96: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
AssistedEvolve: RT @hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
UrScienceFriend: RT @hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
anoobian: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
rhira2016: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
swapnil_bishnu: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
mnrmja007: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
GuXuemei: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
howardmeng: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
straybrid: RT @hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
Mufei_Li: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
0xhexhex: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
saadmrb: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
Ogtaufmixpoike6: RT @malokhem: It's finally here: a deep-learning-based paper-writing bot https://t.co/log7Et1mFA
t_sanfe: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
Charles9n: RT @Miles_Brundage: Very interesting: "PaperRobot: Incremental Draft Generation of Scientific Ideas," Wang et al.: https://t.co/v0zQC8oRB3…
ialhashims: RT @hardmaru: Automating academic supervisors? https://t.co/JhFoRnrwIv https://t.co/eKydraGmf5
Github
Repository: PaperRobot
User: EagleW
Language: None
Stargazers: 16
Subscribers: 11
Forks: 0
Open Issues: 0
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 7
Total Words: 7967
Unique Words: 2765

2.217 Mikeys
#2. Adaptive Stochastic Natural Gradient Method for One-Shot Neural Architecture Search
Youhei Akimoto, Shinichi Shirakawa, Nozomu Yoshinari, Kento Uchida, Shota Saito, Kouhei Nishida
The high sensitivity of neural architecture search (NAS) methods to their inputs, such as the step-size (i.e., learning rate) and the search space, prevents practitioners from applying them out of the box to their own problems, even though the purpose of NAS is to automate part of the tuning process. Aiming at a fast, robust, and widely applicable NAS, we develop a generic optimization framework for NAS. We turn the coupled optimization of connection weights and neural architecture into a differentiable optimization by means of stochastic relaxation. The framework accepts an arbitrary search space (widely applicable) and enables gradient-based simultaneous optimization of weights and architecture (fast). We propose a stochastic natural gradient method with an adaptive step-size mechanism built upon our theoretical investigation (robust). Despite its simplicity and the absence of problem-dependent parameter tuning, our method exhibits near state-of-the-art performance with low computational budgets on both image classification and inpainting tasks.
more | pdf | html
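The core update in the abstract, a stochastic natural gradient step on a distribution over architectures, has a particularly simple form for a categorical variable in expectation parameterization: the natural-gradient estimate is u * (onehot(x) - theta). Below is a minimal runnable sketch of that update on a toy objective; the paper's adaptive step-size mechanism is deliberately simplified to a fixed delta, and the utility function is ours, not the paper's.

    import numpy as np

    # Minimal sketch of a stochastic natural-gradient update for a
    # categorical architecture distribution (ASNG's adaptive step-size
    # mechanism is simplified here to a fixed delta).

    rng = np.random.default_rng(0)
    n_choices = 4                                  # e.g. operations per edge
    theta = np.full(n_choices, 1.0 / n_choices)    # expectation parameters

    def utility(arch):
        return -abs(int(arch) - 2)                 # toy stand-in for val. accuracy

    delta = 0.1                                    # step size (adapted online in ASNG)
    for step in range(200):
        xs = rng.choice(n_choices, size=2, p=theta)    # sample architectures
        us = np.array([utility(x) for x in xs], dtype=float)
        us -= us.mean()                            # centering as a simple baseline
        grad = np.zeros(n_choices)
        for x, u in zip(xs, us):
            onehot = np.eye(n_choices)[x]
            grad += u * (onehot - theta)           # natural gradient for categorical
        theta += delta * grad / len(xs)
        theta = np.clip(theta, 1e-6, None)
        theta /= theta.sum()

    print(theta.argmax())   # typically concentrates on the best choice, 2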
Figures
None.
Tweets
BrundageBot: Adaptive Stochastic Natural Gradient Method for One-Shot Neural Architecture Search. Youhei Akimoto, Shinichi Shirakawa, Nozomu Yoshinari, Kento Uchida, Shota Saito, and Kouhei Nishida https://t.co/3BXgz33PMt
argv_sat184: Our ICML2019 paper has been available on arXiv! https://t.co/Qx0sfrIb0k We proposed ASNG-NAS, and applied image classification on CIFAR-10 and image restoration on CelebA. ASNG-NAS acquired a structure with the test error 2.76% in 0.11 GPU days! https://t.co/nErqINeU0m
argv_sat184: Our ICML2019 paper has been available on arXiv! https://t.co/kPTj843wOK We proposed ASNG-NAS, and applied image classification on CIFAR-10 and image restoration on CelebA. ASNG-NAS acquired a structure with the test error 2.83% on CIFAR-10 in 0.11 GPU days! https://t.co/n7s6MlOKO6
arxiv_cs_LG: Adaptive Stochastic Natural Gradient Method for One-Shot Neural Architecture Search. Youhei Akimoto, Shinichi Shirakawa, Nozomu Yoshinari, Kento Uchida, Shota Saito, and Kouhei Nishida https://t.co/PxnNw0SyfL
StatsPapers: Adaptive Stochastic Natural Gradient Method for One-Shot Neural Architecture Search. https://t.co/utgLCNCVs5
imenurok: RT @argv_sat184: Our ICML2019 paper has been available on arXiv! https://t.co/Qx0sfrIb0k We proposed ASNG-NAS, and applied image classific…
imenurok: RT @argv_sat184: Our ICML2019 paper has been available on arXiv! https://t.co/kPTj843wOK We proposed ASNG-NAS, and applied image classific…
KokiMadono: RT @argv_sat184: Our ICML2019 paper has been available on arXiv! https://t.co/kPTj843wOK We proposed ASNG-NAS, and applied image classific…
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 6
Total Words: 0
Unique Words: 0

2.17 Mikeys
#3. The $q$-multiple gamma functions of Barnes-Milnor type
Hanamichi Kawamura
The multiple gamma functions of BM (Barnes-Milnor) type and the $q$-multiple gamma functions have so far been studied independently. In this paper, we introduce a new generalization of the multiple gamma functions, called the $q$-BM multiple gamma function, which includes both of these families, and we prove for it analogues of several properties satisfied by the BM multiple gamma functions.
more | pdf | html
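For orientation, the two families being unified can be recalled in one common normalization (conventions vary across the literature, and the Milnor-type variant modifies where the $s$-derivative of the zeta function is taken; see the paper for the exact definitions):

    \[
      \zeta_N(s, w \mid a_1, \dots, a_N)
        = \sum_{n_1, \dots, n_N \ge 0}
          \bigl(w + n_1 a_1 + \cdots + n_N a_N\bigr)^{-s},
      \qquad
      \Gamma_N(w) = \exp\!\Bigl(
          \tfrac{\partial}{\partial s}\, \zeta_N(s, w)\Big|_{s=0}\Bigr),
    \]
    \[
      \Gamma_q(z) = (1 - q)^{1 - z}
        \prod_{n = 0}^{\infty} \frac{1 - q^{\,n+1}}{1 - q^{\,n+z}},
      \qquad 0 < q < 1 .
    \]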
Figures
None.
Tweets
691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
mathNTb: Hanamichi Kawamura : The $q$-multiple gamma functions of Barnes-Milnor type https://t.co/0yB78jx7jW https://t.co/6E2i5BF6zQ
ephemeral_shade: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
adhara_mathphys: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
adhara_mathphys: RT @mathNTb: Hanamichi Kawamura : The $q$-multiple gamma functions of Barnes-Milnor type https://t.co/0yB78jx7jW https://t.co/6E2i5BF6zQ
_kohta: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
neet2go: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
sitositositoo: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
math_ter0713: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
yozo_poya1010: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
reviewer_amzn_m: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
691_7758337633: RT @mathNTb: Hanamichi Kawamura : The $q$-multiple gamma functions of Barnes-Milnor type https://t.co/0yB78jx7jW https://t.co/6E2i5BF6zQ
dark_yoshi_math: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
junpi316: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
estis__: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
R_O_R_I_J_O: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
kuma_0437: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
kagekatsu_chs2: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
1killer_snail10: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
dentakumath: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
KonumaTakaki: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
math_elliptic: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
OSAKA_DTC: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
4294967291prime: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
yosswi414_0: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
Freufirst: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
kiricat848: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
Space_kid_Jr: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
ONEWAN: RT @mathNTb: Hanamichi Kawamura : The $q$-multiple gamma functions of Barnes-Milnor type https://t.co/0yB78jx7jW https://t.co/6E2i5BF6zQ
henzihenzi: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
pxfnc: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
Burnt_life: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
ponta_both: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
Kobe_La26: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
tyamada1093: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
tyamada1093: RT @mathNTb: Hanamichi Kawamura : The $q$-multiple gamma functions of Barnes-Milnor type https://t.co/0yB78jx7jW https://t.co/6E2i5BF6zQ
tkg5th: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
milm_ac: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
Kind_Kings: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
akarisugaku: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
emt7r05: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
rrmg142857: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
rok_r5: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
taxfree_python: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
tks392: RT @mathNTb: Hanamichi Kawamura : The $q$-multiple gamma functions of Barnes-Milnor type https://t.co/0yB78jx7jW https://t.co/6E2i5BF6zQ
tks392: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
wzpghfcda: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
schil_ler: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
TarhoYamada: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
KOUSEI2002RIKEI: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
oimo_pad: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
EGG114514801: RT @691_7758337633: !?!? What is this paper, it's wild https://t.co/rYurMkQ4Jk
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 1
Total Words: 1729
Unique Words: 556

2.169 Mikeys
#4. Parallel Neural Text-to-Speech
Kainan Peng, Wei Ping, Zhao Song, Kexin Zhao
In this work, we propose a non-autoregressive seq2seq model that converts text to spectrogram. It is fully convolutional and obtains a roughly 17.5x speed-up over Deep Voice 3 at synthesis while maintaining comparable speech quality using a WaveNet vocoder. Interestingly, it has even fewer attention errors than the autoregressive model on the challenging test sentences. Furthermore, we build the first fully parallel neural text-to-speech system by applying the inverse autoregressive flow (IAF) as the parallel neural vocoder. Our system can synthesize speech from text through a single feed-forward pass. We also explore a novel approach to train the IAF from scratch as a generative model for raw waveform, which avoids the need for distillation from a separately trained WaveNet.
more | pdf | html
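The key architectural point, a single feed-forward (non-autoregressive) pass from text to spectrogram using only convolutions, can be sketched as below. This is a toy PyTorch illustration under our own simplifications (fixed 4x length regulation, no positional encodings or attention), not the paper's model:

    import torch
    import torch.nn as nn

    class ParallelTextToMel(nn.Module):
        """Toy non-autoregressive, fully convolutional text-to-mel sketch."""
        def __init__(self, vocab=64, d=128, n_mels=80, upsample=4):
            super().__init__()
            self.embed = nn.Embedding(vocab, d)
            self.encoder = nn.Sequential(
                nn.Conv1d(d, d, 5, padding=2), nn.ReLU(),
                nn.Conv1d(d, d, 5, padding=2), nn.ReLU(),
            )
            # crude length regulation: stretch text frames to mel frames
            self.upsample = nn.ConvTranspose1d(d, d, upsample, stride=upsample)
            self.decoder = nn.Sequential(
                nn.Conv1d(d, d, 5, padding=2), nn.ReLU(),
                nn.Conv1d(d, n_mels, 1),
            )

        def forward(self, tokens):                    # tokens: (B, T_text)
            h = self.embed(tokens).transpose(1, 2)    # (B, d, T_text)
            h = self.encoder(h)
            h = self.upsample(h)                      # (B, d, 4*T_text)
            return self.decoder(h)                    # (B, n_mels, T_mel)

    mel = ParallelTextToMel()(torch.randint(0, 64, (1, 20)))
    print(mel.shape)   # torch.Size([1, 80, 80]) -- one parallel pass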
Figures
Tweets
BrundageBot: Parallel Neural Text-to-Speech. Kainan Peng, Wei Ping, Zhao Song, and Kexin Zhao https://t.co/donUIpMPWT
__dhgrs__: A new TTS paper from the Baidu Research team that developed Deep Voice. In the same way WaveNet was turned into Parallel WaveNet, they parallelized the attention mechanism; the key point is that this not only achieved a speed-up but also reduced attention errors. https://t.co/o9vFo1VFtd
reddit_ml: [R] Parallel Neural Text-to-Speech https://t.co/fuAP6fu3UC
arxiv_cs_LG: Parallel Neural Text-to-Speech. Kainan Peng, Wei Ping, Zhao Song, and Kexin Zhao https://t.co/ffs3sOmu7A
arxiv_cscl: Parallel Neural Text-to-Speech https://t.co/nv9tZroqxo
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 7400
Unique Words: 2585

2.168 Mikeys
#5. Lightweight Network Architecture for Real-Time Action Recognition
Alexander Kozlov, Vadim Andronov, Yana Gritsenko
In this work we present a new efficient approach to Human Action Recognition called Video Transformer Network (VTN). It leverages the latest advances in Computer Vision and Natural Language Processing and applies them to video understanding. The proposed method allows us to create lightweight CNN models that achieve high accuracy and real-time speed using just an RGB mono camera and a general-purpose CPU. Furthermore, we explain how to improve accuracy by distilling from multiple models with different modalities into a single model. We conduct a comparison with state-of-the-art methods and show that our approach performs on par with most of them on well-known action recognition datasets. We benchmark the inference time of the models using a modern inference framework and argue that our approach compares favorably with other methods in terms of the speed/accuracy trade-off, running at 56 FPS on a CPU. The models and the training code are available.
more | pdf | html
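The multi-modal distillation step mentioned in the abstract (several modality-specific teachers, e.g. RGB and optical flow, compressed into one RGB student) is, in its simplest form, an averaged soft-label KL term added to the usual cross-entropy. A generic sketch; the temperature T and weight alpha are our own illustrative choices, not the paper's settings:

    import torch.nn.functional as F

    def distill_loss(student_logits, teacher_logits_list, labels,
                     T=4.0, alpha=0.5):
        """Cross-entropy on labels + averaged soft-label KL to the teachers."""
        ce = F.cross_entropy(student_logits, labels)
        kd = 0.0
        for t_logits in teacher_logits_list:
            kd = kd + F.kl_div(
                F.log_softmax(student_logits / T, dim=1),
                F.softmax(t_logits / T, dim=1),
                reduction="batchmean",
            ) * (T * T)          # rescale gradients for the soft targets
        kd = kd / len(teacher_logits_list)
        return alpha * ce + (1 - alpha) * kd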
Figures
None.
Tweets
BrundageBot: Lightweight Network Architecture for Real-Time Action Recognition. Alexander Kozlov, Vadim Andronov, and Yana Gritsenko https://t.co/Ap1z82Vc7E
ZFPhalanx: Lightweight Network Architecture for Real-Time Action Recognition https://t.co/TMA0vk6QGP 「decoder that integrates intra-frame temporal information ...」🤔
SciFi: Lightweight Network Architecture for Real-Time Action Recognition. https://t.co/6OWchP7e0s
arxiv_cs_LG: Lightweight Network Architecture for Real-Time Action Recognition. Alexander Kozlov, Vadim Andronov, and Yana Gritsenko https://t.co/KHMcFBDSzN
arxiv_cscv: Lightweight Network Architecture for Real-Time Action Recognition https://t.co/trqC00qw9l
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 3
Total Words: 0
Unique Words: 0

2.163 Mikeys
#6. Towards Neural Decompilation
Omer Katz, Yuval Olshaker, Yoav Goldberg, Eran Yahav
We address the problem of automatic decompilation, converting a program in a low-level representation back to a higher-level, human-readable programming language. The problem of decompilation is extremely important for security researchers. Finding vulnerabilities and understanding how malware operates is much easier when done over source code. The importance of decompilation has motivated the construction of hand-crafted rule-based decompilers. Such decompilers have been designed by experts to detect specific control-flow structures and idioms in low-level code and lift them to source level. The cost of supporting additional languages or new language features in these models is very high. We present a novel approach to decompilation based on neural machine translation. The main idea is to automatically learn a decompiler from a given compiler. Given a compiler from a source language S to a target language T, our approach automatically trains a decompiler that can translate (decompile) T back to S. We used our framework to...
more | pdf | html
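The central idea, automatically learning a decompiler from a given compiler, works because the compiler produces a parallel corpus for free: sample source programs, compile them, and train a translation model on (target, source) pairs. A schematic, runnable toy sketch; random_program and toy_compile are stand-ins for a real program sampler and compiler (e.g. C to LLVM IR), and the NMT training step is left as a comment:

    import random

    def random_program():
        # toy sampler over a miniature source language
        a, b = random.randint(0, 9), random.randint(0, 9)
        return f"x = {a} + {b}"

    def toy_compile(src):
        # stand-in "compiler": source -> a toy three-address form
        a, b = src.split("=")[1].split("+")
        return f"LOAD {a.strip()}; ADD {b.strip()}; STORE x"

    def build_corpus(n_samples, compile_fn=toy_compile):
        # (target-language, source-language) pairs; an NMT model trained
        # on these pairs translates T back to S, i.e. it decompiles
        srcs = [random_program() for _ in range(n_samples)]
        return [(compile_fn(s), s) for s in srcs]

    for low, src in build_corpus(3):
        print(low, "->", src)
    # model = train_seq2seq(inputs=[t for t, s in pairs],
    #                       outputs=[s for t, s in pairs])  # hypothetical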
Figures
Tweets
Miles_Brundage: "Towards Neural Decompilation," Katz et al.: https://t.co/MLIHbuntRy
BrundageBot: Towards Neural Decompilation. Omer Katz, Yuval Olshaker, Yoav Goldberg, and Eran Yahav https://t.co/6CrnJQCutq
NandoDF: RT @Miles_Brundage: "Towards Neural Decompilation," Katz et al.: https://t.co/MLIHbuntRy
judegomila: RT @Miles_Brundage: "Towards Neural Decompilation," Katz et al.: https://t.co/MLIHbuntRy
rquintino: RT @Miles_Brundage: "Towards Neural Decompilation," Katz et al.: https://t.co/MLIHbuntRy
mrdrozdov: RT @Miles_Brundage: "Towards Neural Decompilation," Katz et al.: https://t.co/MLIHbuntRy
suvsh: RT @Miles_Brundage: "Towards Neural Decompilation," Katz et al.: https://t.co/MLIHbuntRy
dmi_paras: RT @Miles_Brundage: "Towards Neural Decompilation," Katz et al.: https://t.co/MLIHbuntRy
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 13606
Unique Words: 3185

2.159 Mikeys
#7. Quantum Fluctuations at the Planck Scale
Fulvio Melia
The recently measured cutoff, k_min=[4.34(+/-)0.50]/r_cmb (with r_cmb the comoving distance to the last scattering surface), in the fluctuation spectrum of the cosmic microwave background, appears to disfavor slow-roll inflation and the associated transition of modes across the horizon. We show in this Letter that k_min instead corresponds to the first mode emerging out of the Planck domain into the semi-classical universe. The required scalar-field potential is exponential, though not inflationary, and satisfies the zero active mass condition, rho_phi+3p_phi=0. Quite revealingly, the observed amplitude of the temperature anisotropies requires the quantum fluctuations in phi to have classicalized at ~3.5x10^15 GeV, consistent with the energy scale in grand unified theories. Such scalar-field potentials are often associated with Kaluza-Klein cosmologies, string theory and even supergravity.
more | pdf | html
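A quick consistency check on the abstract's claims, using the textbook scaling solution for an exponential scalar-field potential (standard cosmology, not taken from the paper):

    For $V(\phi) = V_0\, e^{-\lambda \phi / M_{\mathrm{Pl}}}$ the scaling
    solution has
    \[
      a(t) \propto t^{2/\lambda^2},
      \qquad
      w_\phi \equiv \frac{p_\phi}{\rho_\phi} = \frac{\lambda^2}{3} - 1 ,
    \]
    so the zero active mass condition $\rho_\phi + 3 p_\phi = 0$, i.e.
    $w_\phi = -1/3$, selects $\lambda^2 = 2$ and hence $a(t) \propto t$:
    an exponential potential that is nonetheless not inflationary, since
    $\ddot{a} = 0$ rather than $\ddot{a} > 0$.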
Figures
None.
Tweets
life_wont_wait: I don't know this observation at all either, but is there really a claim that slow-roll is dead?/Quantum Fluctuations at the Planck Scale https://t.co/33gPkDV7mY
dajmeyer: .@FulvioMelia, Quantum Fluctuations at Planck Scale https://t.co/gvKRqlNpOl "missing is firm understanding of how classicalization converts homogeneous, isotropic quantum fluctuations into classical anisotropies..prob w/all models invoking a quantum origin for the perturbations"
StarshipBuilder: Quantum Fluctuations at the Planck Scale https://t.co/pu21i9SndR
HEPPhenoPapers: Quantum Fluctuations at the Planck Scale. https://t.co/xgJLKaRTha
s_dual: RT @life_wont_wait: I don't know this observation at all either, but is there really a claim that slow-roll is dead?/Quantum Fluctuations at the Planck Scale https://t.co/33gPkDV7mY
BryanKeIIy: RT @qraal: [1905.08626] Quantum Fluctuations at the Planck Scale https://t.co/t4S9tOGQkG
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 1
Total Words: 0
Unique Words: 0

2.157 Mikeys
#8. CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks
Roberto Dessì, Marco Baroni
Lake and Baroni (2018) introduced the SCAN dataset, probing the ability of seq2seq models to capture compositional generalizations, such as inferring the meaning of "jump around" 0-shot from the component words. Recurrent networks (RNNs) were found to completely fail the most challenging generalization cases. Here we test a convolutional network (CNN) on these tasks, reporting hugely improved performance with respect to RNNs. Despite the big improvement, the CNN has, however, not induced systematic rules, suggesting that the difference between compositional and non-compositional behaviour is not clear-cut.
more | pdf | html
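SCAN pairs natural-language commands with action sequences, and the compositional split holds out all composed uses of a primitive that training shows only in isolation. A toy, runnable sketch of the data layout and the sequence-level (exact-match) evaluation; the memorizer stands in for any trained seq2seq model:

    # Toy SCAN-style data; the compositional split holds out composed
    # uses of "jump", which appears in training only in isolation.
    train = [
        ("jump", "JUMP"),
        ("walk twice", "WALK WALK"),
        ("walk around right", "RTURN WALK RTURN WALK RTURN WALK RTURN WALK"),
    ]
    test = [
        ("jump twice", "JUMP JUMP"),
        ("jump around right", "RTURN JUMP RTURN JUMP RTURN JUMP RTURN JUMP"),
    ]

    def exact_match_accuracy(translate, pairs):
        # SCAN is scored at sequence level: every token must match
        return sum(translate(c) == a for c, a in pairs) / len(pairs)

    # a pure memorizer gets 100% on train and 0% on the held-out split;
    # the question is how far above this floor a CNN seq2seq lands
    memorizer = dict(train).get
    print(exact_match_accuracy(memorizer, test))   # 0.0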
Figures
None.
Tweets
BrundageBot: CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks. Roberto Dessì and Marco Baroni https://t.co/H4BjnPmPe3
SciFi: CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks. https://t.co/WHsNLIbEA5
arxiv_cs_LG: CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks. Roberto Dessì and Marco Baroni https://t.co/OiH6X7YLWU
arxiv_cscl: CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks https://t.co/NfsG95WqkP
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 2
Total Words: 0
Unique Words: 0

2.154 Mikeys
#9. Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction
Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita
Question answering (QA) using textual sources, such as reading comprehension (RC), has attracted much attention recently. This study focuses on the task of explainable multi-hop QA, which requires the system to return the answer together with evidence sentences, by reasoning over and gathering disjoint pieces of the reference texts. For evidence extraction in explainable multi-hop QA, existing methods extract evidence sentences by evaluating the importance of each sentence independently. In this study, we propose the Query Focused Extractor (QFE) model and introduce multi-task learning of the QA model for answer selection and the QFE model for evidence extraction. Inspired by extractive summarization models, QFE sequentially extracts evidence sentences using an RNN with an attention mechanism over the question sentence. This enables QFE to consider dependencies among the evidence sentences and to cover the important information in the question sentence. Experimental results show that QFE with the simple RC baseline model achieves a...
more | pdf | html
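The extraction mechanism described in the abstract, an RNN state updated with each extracted sentence plus attention over the question, can be sketched as follows. Dimensions, names, and the stopping rule (a fixed number of steps, no masking of already-picked sentences) are our simplifications, not the paper's exact model:

    import torch
    import torch.nn as nn

    class EvidenceExtractor(nn.Module):
        """QFE-style sequential evidence extraction (illustrative only)."""
        def __init__(self, d=128):
            super().__init__()
            self.cell = nn.GRUCell(d, d)       # extraction-state RNN
            self.attn = nn.Linear(2 * d, 1)    # attention over the question
            self.score = nn.Linear(3 * d, 1)   # sentence scorer

        def forward(self, sents, question, n_steps=2):
            # sents: (S, d) sentence vectors; question: (Q, d) word vectors
            z = sents.mean(0)                  # initial extraction state
            picked = []
            for _ in range(n_steps):
                # glimpse of the question, conditioned on the state z
                q_in = torch.cat([question, z.expand(question.size(0), -1)], 1)
                a = torch.softmax(self.attn(q_in), 0)           # (Q, 1)
                q_vec = (a * question).sum(0)                   # (d,)
                feats = torch.cat([sents,
                                   z.expand_as(sents),
                                   q_vec.expand_as(sents)], 1)  # (S, 3d)
                idx = self.score(feats).squeeze(1).argmax().item()
                picked.append(idx)             # a real system would mask idx
                z = self.cell(sents[idx].unsqueeze(0),
                              z.unsqueeze(0)).squeeze(0)
            return picked

    print(EvidenceExtractor()(torch.randn(5, 128), torch.randn(8, 128)))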
Figures
None.
Tweets
BrundageBot: Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction. Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, and Junji Tomita https://t.co/PsIowC4yuA
kyoun: Our #ACL2019_Italy paper about explainable multi-hop QA is out on arXiv! https://t.co/RdGq0wokpW We propose a query-based extractive summarization model, QFE, and the multi-task learning of QA and evidence extraction. Our model achieves SOTA in evidence extraction on HotpotQA! https://t.co/cG1KNJM9jN
arxiv_cscl: Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction https://t.co/W5GxOPjD6q
ymym3412: RT @kyoun: Our #ACL2019_Italy paper about explainable multi-hop QA is out on arXiv! https://t.co/RdGq0wokpW We propose a query-based extra…
devoidikk: RT @kyoun: Our #ACL2019_Italy paper about explainable multi-hop QA is out on arXiv! https://t.co/RdGq0wokpW We propose a query-based extra…
sobamchan: RT @kyoun: Our #ACL2019_Italy paper about explainable multi-hop QA is out on arXiv! https://t.co/RdGq0wokpW We propose a query-based extra…
rose_miura: RT @kyoun: Our #ACL2019_Italy paper about explainable multi-hop QA is out on arXiv! https://t.co/RdGq0wokpW We propose a query-based extra…
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 7
Total Words: 0
Unique Words: 0

2.139 Mikeys
#10. Verification Artifacts in Cooperative Verification: Survey and Unifying Component Framework
Dirk Beyer, Heike Wehrheim
The goal of cooperative verification is to combine verification approaches in such a way that they work together to verify a system model. In particular, cooperative verifiers provide exchangeable information (verification artifacts) to other verifiers, or consume such information from other verifiers, with the goal of increasing the overall effectiveness and efficiency of the verification process. This paper first gives an overview of approaches for leveraging the strengths of different techniques, algorithms, and tools in order to increase the power and abilities of the state of the art in software verification. Second, we specifically outline cooperative verification approaches and discuss the verification artifacts they employ. We formalize all artifacts in a uniform way, thereby fixing their semantics and providing verifiers with a precise meaning for the exchanged information.
more | pdf | html
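Read concretely, a "verification artifact" is a typed, exchangeable record that one verifier produces and another consumes. A minimal sketch of that interface; the class and field names are ours, not the paper's formalization:

    from dataclasses import dataclass

    @dataclass
    class Witness:                 # e.g. a violation or correctness witness
        kind: str                  # "violation" | "correctness"
        automaton: str             # witness automaton, e.g. serialized GraphML

    @dataclass
    class Condition:               # residual work left for the next verifier
        covered: str               # describes the already-verified parts

    def cooperative_verify(program, verifiers):
        # each verifier consumes the previous artifact and may refine it
        artifact = None
        for v in verifiers:
            result, artifact = v.verify(program, consume=artifact)
            if result in ("TRUE", "FALSE"):
                return result, artifact   # verdict plus supporting artifact
        return "UNKNOWN", artifact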
Figures
Tweets
Github
Repository: sv-witnesses (An Exchange Format for Verification Witnesses)
User: sosy-lab
Language: Python
Stargazers: 8
Subscribers: 7
Forks: 3
Open Issues: 2
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 2
Total Words: 9303
Unique Words: 2787

About

Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.

Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).

To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else).

To see beautiful figures extracted from papers, follow us on Instagram.

Tracking 129,961 papers.
