Top 10 Arxiv Papers Today


2.059 Mikeys
#1. PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable
Siqi Bao, Huang He, Fan Wang, Hua Wu
Pre-trained models have proved effective for a wide range of natural language processing tasks. Inspired by this, we propose a novel dialogue generation pre-training framework to support various kinds of conversations, including chit-chat, knowledge grounded dialogues, and conversational question answering. In this framework, we adopt flexible attention mechanisms to fully leverage the bi-directional context and the uni-directional characteristic of language generation. We also introduce discrete latent variables to tackle the inherent one-to-many mapping problem in response generation. Two reciprocal tasks, response generation and latent act recognition, are designed and carried out simultaneously within a shared network. Comprehensive experiments on three publicly available datasets verify the effectiveness and superiority of the proposed framework.
more | pdf | html
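As a rough illustration of the abstract's discrete-latent idea (not the authors' code), the sketch below samples a latent dialogue act z from K categories, recognizes it from the context-response pair, and conditions a toy decoder on it. All module names, sizes, and the GRU/Gumbel-softmax choices are assumptions made for brevity; PLATO itself is transformer-based.

import torch
import torch.nn as nn
import torch.nn.functional as F

K, d_model, vocab = 20, 256, 8000                        # assumed number of latent acts / sizes

class ToyLatentDialogue(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.latent_embed = nn.Embedding(K, d_model)     # one embedding per latent act
        self.act_recognizer = nn.Linear(2 * d_model, K)  # latent act recognition head
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab)         # response generation head

    def forward(self, context_ids, response_ids):
        ctx, _ = self.encoder(self.embed(context_ids))
        rsp, _ = self.encoder(self.embed(response_ids))
        # Recognize a discrete act z from (context, response) and sample it differentiably.
        act_logits = self.act_recognizer(torch.cat([ctx[:, -1], rsp[:, -1]], dim=-1))
        z = F.gumbel_softmax(act_logits, tau=1.0, hard=True) @ self.latent_embed.weight
        # Generate the response conditioned on the sampled act (used as the decoder's h0).
        dec, _ = self.decoder(self.embed(response_ids), z.unsqueeze(0).contiguous())
        return self.lm_head(dec), act_logits

model = ToyLatentDialogue()
ctx = torch.randint(0, vocab, (2, 12))                   # a batch of 2 toy contexts
rsp = torch.randint(0, vocab, (2, 9))                    # and 2 toy responses
token_logits, act_logits = model(ctx, rsp)
print(token_logits.shape, act_logits.shape)              # (2, 9, 8000) and (2, 20)
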
Figures
None.
Tweets
BrundageBot: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable. Siqi Bao, Huang He, Fan Wang, and Hua Wu https://t.co/iGD5N23csk
roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.co/rykVKOnFXg github: https://t.co/t117VUGZUk https://t.co/2mAvTZAf58
arxiv_in_review: #acl2019nlp PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable. (arXiv:1910.07931v1 [cs\.CL]) https://t.co/wDmHyfoLHJ
arxivml: "PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable", Siqi Bao, Huang He, Fan Wang, Hua Wu https://t.co/e5G3y1MKQ0
arxiv_cscl: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable https://t.co/KcCBInIYeh
arnicas: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
ceobillionaire: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
evolvingstuff: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
EricSchles: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
ialuronico: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
philip368320: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
KouroshMeshgi: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
balicea1: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
dannyehb: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
BedabrataChoud: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
AndroidBlogger: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
Pol09122455: RT @roadrunning01: PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable pdf: https://t.co/oDYCypV0XD abs: https://t.c…
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 0
Unique Words: 0

2.049 Mikeys
#2. Visual Hide and Seek
Boyuan Chen, Shuran Song, Hod Lipson, Carl Vondrick
We train embodied agents to play Visual Hide and Seek, where a prey must navigate a simulated environment to avoid capture by a predator. We place a variety of obstacles in the environment for the prey to hide behind, and we only give the agents partial observations of their environment from an egocentric perspective. Although we train the model to play this game from scratch, experiments and visualizations suggest that the agent learns to predict its own visibility in the environment. Furthermore, we quantitatively analyze how agent weaknesses, such as slower speed, affect the learned policy. Our results suggest that, although agent weaknesses make the learning problem more challenging, they also cause more useful features to be learned. Our project website is available at: http://www.cs.columbia.edu/~bchen/visualhideseek/.
more | pdf | html
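To make the "partial, egocentric observation" setup concrete, here is a minimal sketch (an assumption, not the paper's environment) of cropping a local window around the prey from a grid map, so the agent never sees the full state.

import numpy as np

def egocentric_view(grid, agent_rc, radius=2):
    """Crop a (2*radius+1)^2 window centered on the agent, padding off-map cells with -1."""
    padded = np.pad(grid, radius, constant_values=-1)
    r, c = agent_rc[0] + radius, agent_rc[1] + radius
    return padded[r - radius:r + radius + 1, c - radius:c + radius + 1]

world = np.zeros((8, 8), dtype=int)
world[3, 4] = 2                      # an obstacle the prey could hide behind
prey = (0, 1)                        # prey near the edge: part of its view is off-map
print(egocentric_view(world, prey))  # 5x5 local window, not the full 8x8 state
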
Figures
Tweets
arxivml: "Visual Hide and Seek", Boyuan Chen, Shuran Song, Hod Lipson, Carl Vondrick https://t.co/NKjjRh6IA2
helixdomesticus: It is no longer possible to beat computers at chess and Go. Apparently we do not have much time left for hide-and-seek either. Let's play plenty and beat the computers at hide-and-seek while we still can! https://t.co/jomp8G7zTa
SciFi: Visual Hide and Seek. https://t.co/6E8BPIuenq
arxiv_cscv: Visual Hide and Seek https://t.co/OsreIc7h3B
hardmaru: RT @roadrunning01: Visual Hide and Seek pdf: https://t.co/dbxvdPwZVl abs: https://t.co/KSsU8hQQKN project page: https://t.co/avKeboftEA htt…
masafumi: RT @roadrunning01: Visual Hide and Seek pdf: https://t.co/dbxvdPwZVl abs: https://t.co/KSsU8hQQKN project page: https://t.co/avKeboftEA htt…
CSProfKGD: RT @roadrunning01: Visual Hide and Seek pdf: https://t.co/dbxvdPwZVl abs: https://t.co/KSsU8hQQKN project page: https://t.co/avKeboftEA htt…
maggie_albrecht: RT @roadrunning01: Visual Hide and Seek pdf: https://t.co/dbxvdPwZVl abs: https://t.co/KSsU8hQQKN project page: https://t.co/avKeboftEA htt…
EricSchles: RT @roadrunning01: Visual Hide and Seek pdf: https://t.co/dbxvdPwZVl abs: https://t.co/KSsU8hQQKN project page: https://t.co/avKeboftEA htt…
KageKirin: RT @roadrunning01: Visual Hide and Seek pdf: https://t.co/dbxvdPwZVl abs: https://t.co/KSsU8hQQKN project page: https://t.co/avKeboftEA htt…
KouroshMeshgi: RT @roadrunning01: Visual Hide and Seek pdf: https://t.co/dbxvdPwZVl abs: https://t.co/KSsU8hQQKN project page: https://t.co/avKeboftEA htt…
Nbring: RT @roadrunning01: Visual Hide and Seek pdf: https://t.co/dbxvdPwZVl abs: https://t.co/KSsU8hQQKN project page: https://t.co/avKeboftEA htt…
HengjianJia: RT @roadrunning01: Visual Hide and Seek pdf: https://t.co/dbxvdPwZVl abs: https://t.co/KSsU8hQQKN project page: https://t.co/avKeboftEA htt…
camilodoa: RT @roadrunning01: Visual Hide and Seek pdf: https://t.co/dbxvdPwZVl abs: https://t.co/KSsU8hQQKN project page: https://t.co/avKeboftEA htt…
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 7156
Unique Words: 2438

2.045 Mikeys
#3. Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets
Yogesh Balaji, Tom Goldstein, Judy Hoffman
Adversarial training is by far the most successful strategy for improving the robustness of neural networks to adversarial attacks. Despite its success as a defense mechanism, adversarial training fails to generalize well to the unperturbed test set. We hypothesize that this poor generalization is a consequence of adversarial training with a uniform perturbation radius around every training sample. Samples close to the decision boundary can be morphed into a different class under a small perturbation budget, and enforcing large margins around these samples produces poor decision boundaries that generalize poorly. Motivated by this hypothesis, we propose instance adaptive adversarial training -- a technique that enforces sample-specific perturbation margins around every training sample. We show that using our approach, test accuracy on unperturbed samples improves with a marginal drop in robustness. Extensive experiments on CIFAR-10, CIFAR-100 and ImageNet datasets demonstrate the effectiveness of our proposed approach.
more | pdf | html
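A hedged sketch of the core idea: L_inf PGD adversarial training where each sample carries its own perturbation radius eps[i], grown while the sample remains robustly classified and shrunk otherwise. The update rule, step sizes, and bounds below are illustrative assumptions, not the paper's exact algorithm.

import torch
import torch.nn.functional as F

def pgd_per_sample(model, x, y, eps, steps=7, alpha_frac=0.3):
    """L_inf PGD where eps has shape (batch,): one perturbation radius per sample."""
    eps_ = eps.view(-1, 1, 1, 1)
    delta = (torch.rand_like(x) * 2 - 1) * eps_               # random start in each ball
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = delta + alpha_frac * eps_ * grad.sign()   # ascent step
            delta = torch.max(torch.min(delta, eps_), -eps_)  # project back into each ball
    return (x + delta).detach()

def update_radii(model, x, y, eps, grow=1.1, shrink=0.9, eps_min=1/255, eps_max=16/255):
    """Grow eps for samples still classified correctly under attack, shrink it otherwise."""
    x_adv = pgd_per_sample(model, x, y, eps)
    with torch.no_grad():
        still_correct = model(x_adv).argmax(dim=1).eq(y)
    factor = torch.where(still_correct, torch.full_like(eps, grow), torch.full_like(eps, shrink))
    return (eps * factor).clamp(eps_min, eps_max)

# Tiny demo with a stand-in classifier; in adversarial training the per-sample eps
# buffer would be refreshed periodically and used when crafting training examples.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
eps = torch.full((4,), 8 / 255)
print(update_radii(model, x, y, eps))
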
Figures
Tweets
BrundageBot: Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets. Yogesh Balaji, Tom Goldstein, and Judy Hoffman https://t.co/i76D7VJte3
arxivml: "Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets", Yogesh Balaji, Tom Goldstein,… https://t.co/e6mRmbOF64
arxiv_cs_LG: Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets. Yogesh Balaji, Tom Goldstein, and Judy Hoffman https://t.co/U5KJaHy0cI
StatsPapers: Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets. https://t.co/Ve3hhjB99D
arxiv_cscv: Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets https://t.co/B2eBfEn7d7
arxiv_cscv: Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets https://t.co/B2eBfEEI4F
arxiv_cs_cv_pr: Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets. Yogesh Balaji, Tom Goldstein, and Judy Hoffman https://t.co/2MjFFbhqU8
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 3
Total Words: 5254
Unique Words: 1793

2.045 Mikeys
#4. Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction
Ajay Jain, Sergio Casas, Renjie Liao, Yuwen Xiong, Song Feng, Sean Segal, Raquel Urtasun
Self-driving vehicles plan around both static and dynamic objects, applying predictive models of behavior to estimate future locations of the objects in the environment. However, future behavior is inherently uncertain, and models of motion that produce deterministic outputs are limited to short timescales. Particularly difficult is the prediction of human behavior. In this work, we propose the discrete residual flow network (DRF-Net), a convolutional neural network for human motion prediction that captures the uncertainty inherent in long-range motion forecasting. In particular, our learned network effectively captures multimodal posteriors over future human motion by predicting and updating a discretized distribution over spatial locations. We compare our model against several strong competitors and show that our model outperforms all baselines.
more | pdf | html
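The phrase "predicting and updating a discretized distribution over spatial locations" suggests the following toy sketch (an assumption, not the DRF-Net architecture): keep per-cell logits over an H x W grid and refine them with an additive residual at each prediction horizon.

import torch
import torch.nn as nn

class ToyDiscreteResidualFlow(nn.Module):
    def __init__(self, feat_ch=16, horizons=5):
        super().__init__()
        self.horizons = horizons
        self.residual_net = nn.Conv2d(feat_ch + 1, 1, kernel_size=3, padding=1)

    def forward(self, scene_feat, init_logits):
        """scene_feat: (B, C, H, W) map/actor features; init_logits: (B, 1, H, W)."""
        logits, dists = init_logits, []
        for _ in range(self.horizons):
            # Additive residual update of the location logits at each horizon.
            logits = logits + self.residual_net(torch.cat([scene_feat, logits], dim=1))
            p = torch.softmax(logits.flatten(1), dim=1)   # distribution over grid cells
            dists.append(p.view_as(logits))
        return dists                                      # one spatial distribution per horizon

model = ToyDiscreteResidualFlow()
feat = torch.randn(2, 16, 32, 32)
dists = model(feat, torch.zeros(2, 1, 32, 32))
print(len(dists), dists[-1].shape, dists[-1].sum(dim=(1, 2, 3)))  # each map sums to 1
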
Figures
None.
Tweets
BrundageBot: Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction. Ajay Jain, Sergio Casas, Renjie Liao, Yuwen Xiong, Song Feng, Sean Segal, and Raquel Urtasun https://t.co/ZyjOzYEutS
arxivml: "Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction", Ajay Jain, Sergio Casas, Renjie Liao, Yu… https://t.co/Ws4pFxSY6z
lrjconan: Predicting the future behaviour of pedestrians with uncertainty is important for self-driving. We propose a Discrete Residual Flow Network which leverages map information and captures the multi-modality. Check our new #CoRL2019 paper: https://t.co/g3bAKubBVZ https://t.co/pVZC8mqILs
arxiv_cs_LG: Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction. Ajay Jain, Sergio Casas, Renjie Liao, Yuwen Xiong, Song Feng, Sean Segal, and Raquel Urtasun https://t.co/Hw758rXsU4
Memoirs: Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction. https://t.co/jYPQkD5HMK
arxiv_cscv: Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction https://t.co/d88DO3PyAO
arxiv_cs_cv_pr: Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction. Ajay Jain, Sergio Casas, Renjie Liao, Yuwen Xiong, Song Feng, Sean Segal, and Raquel Urtasun https://t.co/fRvwnpZkxM
HubBucket: RT @Memoirs: Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction. https://t.co/jYPQkD5HMK
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 7
Total Words: 0
Unique Words: 0

2.043 Mikeys
#5. Convolutional Character Networks
Linjie Xing, Zhi Tian, Weilin Huang, Matthew R. Scott
Recent progress has been made on developing a unified framework for joint text detection and recognition in natural images, but existing joint models were mostly built on a two-stage framework involving ROI pooling, which can degrade performance on the recognition task. In this work, we propose convolutional character networks, referred to as CharNet, a one-stage model that can process the two tasks simultaneously in one pass. CharNet directly outputs bounding boxes of words and characters, with corresponding character labels. We use characters as the basic element, allowing us to overcome the main difficulty of existing approaches that attempted to optimize text detection jointly with an RNN-based recognition branch. In addition, we develop an iterative character detection approach able to transfer the ability of character detection learned from synthetic data to real-world images. These technical improvements result in a simple, compact, yet powerful one-stage model that works reliably on multi-orientation and curved text. We...
more | pdf | html
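As a minimal illustration of the one-stage output described above (not the released research-charnet code), parallel convolutional heads on a shared feature map predict word boxes, character boxes, and character class labels in a single pass; the channel counts are assumptions.

import torch
import torch.nn as nn

class ToyCharHeads(nn.Module):
    def __init__(self, in_ch=64, num_char_classes=68):            # e.g. digits + letters + symbols
        super().__init__()
        self.word_box = nn.Conv2d(in_ch, 4, 3, padding=1)          # per-pixel word box offsets
        self.char_box = nn.Conv2d(in_ch, 4, 3, padding=1)          # per-pixel character box offsets
        self.char_cls = nn.Conv2d(in_ch, num_char_classes, 3, padding=1)  # character labels

    def forward(self, feat):
        # One pass over the shared backbone feature produces all three outputs.
        return self.word_box(feat), self.char_box(feat), self.char_cls(feat)

heads = ToyCharHeads()
f = torch.randn(1, 64, 48, 48)
w, c, cls = heads(f)
print(w.shape, c.shape, cls.shape)
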
Figures
Tweets
BrundageBot: Convolutional Character Networks. Linjie Xing, Zhi Tian, Weilin Huang, and Matthew R. Scott https://t.co/h6I8vUk3Ar
arxivml: "Convolutional Character Networks", Linjie Xing, Zhi Tian, Weilin Huang, Matthew R. Scott https://t.co/bqfxEAk07C
arxiv_cscv: Convolutional Character Networks https://t.co/yYMItjdVoF
arxiv_cscv: Convolutional Character Networks https://t.co/yYMItiWkx7
arxiv_cs_cv_pr: Convolutional Character Networks. Linjie Xing, Zhi Tian, Weilin Huang, and Matthew R. Scott https://t.co/feGE5hVBjl
keylinker: RT @arxiv_cscv: Convolutional Character Networks https://t.co/yYMItiWkx7
Github

CharNet: Convolutional Character Networks

Repository: research-charnet
User: MalongTech
Language: Python
Stargazers: 20
Subscribers: 5
Forks: 4
Open Issues: 0
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 7049
Unique Words: 1914

2.041 Mikeys
#6. Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control
Lukas Hermann, Max Argus, Andreas Eitel, Artemij Amiranashvili, Wolfram Burgard, Thomas Brox
We propose Adaptive Curriculum Generation from Demonstrations (ACGD) for reinforcement learning in the presence of sparse rewards. Rather than designing shaped reward functions, ACGD adaptively sets the appropriate task difficulty for the learner by controlling where to sample from the demonstration trajectories and which set of simulation parameters to use. We show that training vision-based control policies in simulation while gradually increasing the difficulty of the task via ACGD improves the policy transfer to the real world. The degree of domain randomization is also gradually increased through the task difficulty. We demonstrate zero-shot transfer for two real-world manipulation tasks: pick-and-stow and block stacking.
more | pdf | html
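A small sketch of the curriculum idea, under assumed thresholds and step sizes rather than ACGD's actual schedule: episodes start from states sampled along a demonstration (later states are easier), and both the start index and the domain-randomization range widen as the policy's recent success rate rises.

import random

class ToyCurriculum:
    def __init__(self, target_success=0.7, step=0.05):
        self.difficulty = 0.0            # 0 = start at the end of the demo, 1 = start at the beginning
        self.target_success = target_success
        self.step = step

    def sample_start_index(self, demo_length):
        """Pick a reset state: higher difficulty -> start earlier in the demonstration."""
        earliest = int((1.0 - self.difficulty) * (demo_length - 1))
        return random.randint(earliest, demo_length - 1)

    def sample_sim_params(self):
        """Domain randomization widens with difficulty (illustrative parameter only)."""
        jitter = 0.2 * self.difficulty
        return {"light_intensity": random.uniform(1 - jitter, 1 + jitter)}

    def update(self, recent_success_rate):
        """Raise difficulty when the learner succeeds often, lower it when it struggles."""
        delta = self.step if recent_success_rate > self.target_success else -self.step
        self.difficulty = min(1.0, max(0.0, self.difficulty + delta))

cur = ToyCurriculum()
for success_rate in [0.9, 0.8, 0.4, 0.9]:
    cur.update(success_rate)
print(cur.difficulty, cur.sample_start_index(demo_length=50), cur.sample_sim_params())
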
Figures
None.
Tweets
BrundageBot: Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control. Lukas Hermann, Max Argus, Andreas Eitel, Artemij Amiranashvili, Wolfram Burgard, and Thomas Brox https://t.co/B4nLCLuGl7
arxivml: "Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control", Lukas Hermann, Max Argus, … https://t.co/YalFHbZXGM
arxiv_cs_LG: Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control. Lukas Hermann, Max Argus, Andreas Eitel, Artemij Amiranashvili, Wolfram Burgard, and Thomas Brox https://t.co/EJ6nm1SPnZ
Memoirs: Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control. https://t.co/V40gBEwK8I
arxiv_cscv: Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control https://t.co/T7CUokIS4r
arxiv_cscv: Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control https://t.co/T7CUol0tt1
arxiv_cs_cv_pr: Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control. Lukas Hermann, Max Argus, Andreas Eitel, Artemij Amiranashvili, Wolfram Burgard, and Thomas Brox https://t.co/ROhlHBx86F
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 6
Total Words: 0
Unique Words: 0

2.041 Mikeys
#7. What would happen if we were about 1 pc away from a supermassive black hole?
Lorenzo Iorio
We consider a hypothetical planet with the same mass $m$, radius $R$, angular momentum $\mathbf S$, oblateness $J_2$, semimajor axis $a$, eccentricity $e$, inclination $I$, and obliquity $\varepsilon$ as the Earth, orbiting a main sequence star with the same mass $M_\star$ and radius $R_\star$ as the Sun at a distance $r_\bullet \simeq 1\,\mathrm{parsec}\,\left(\mathrm{pc}\right)$ from a supermassive black hole in the center of the hosting galaxy with the same mass $M_\bullet$ as, say, $\mathrm{M87}^\ast$. We preliminarily investigate some dynamical consequences of its presence in the neighbourhood of such a stellar system on the planet's possibility of sustaining complex life over the eons. In particular, we obtain general analytic expressions for the long-term rates of change, doubly averaged over both the planetary and the galactocentric orbital periods $P_\mathrm{b}$ and $P_\bullet$, of $e,\,I,\,\varepsilon$, which are the main quantities directly linked to the stellar insolation. We find that, for certain orbital configurations,...
more | pdf | html
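A back-of-the-envelope companion (not taken from the paper): Kepler's third law, P = 2*pi*sqrt(a^3 / (G*M)), gives the two orbital periods over which the secular rates are doubly averaged, P_b for the planet around its Sun-like star and P_bullet for the star around an M87*-like black hole at r ~ 1 pc. The ~6.5e9 solar-mass value is the commonly quoted estimate for M87*, and the result should be read as order-of-magnitude only.

import math

G     = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30             # kg
AU    = 1.496e11             # m
PC    = 3.086e16             # m
YEAR  = 3.156e7              # s

def kepler_period(a_m, mass_kg):
    """Orbital period from Kepler's third law."""
    return 2 * math.pi * math.sqrt(a_m**3 / (G * mass_kg))

P_b      = kepler_period(1.0 * AU, M_SUN)              # planet about the Sun-like star
P_bullet = kepler_period(1.0 * PC, 6.5e9 * M_SUN)      # star about the M87*-like black hole

print(f"P_b      ~ {P_b / YEAR:.2f} yr")
print(f"P_bullet ~ {P_bullet / YEAR:.0f} yr")
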
Figures
None.
Tweets
CosmicRami: Paper: If you took an M87 supermassive black hole and placed it 3.26 light years away it might: - impact development of complex life - under right conditions cause planet to slam into star Death by SMBH sounds so much more fancy than killer asteroids! https://t.co/ljAPQLWpS4
norita_kawanaka: A paper on "What if the Solar System were at a distance of 1 parsec (~3 light years) from a supermassive black hole" / What would happen if we were about 1 pc away from a supermassive black hole? - https://t.co/yQWhV9MX3u
qraal: [1910.07760] What would happen if we were about 1 pc away from a supermassive black hole? https://t.co/ebUeFVC294
nick_attree: Well there's an eye catching arxiv tittle "What would happen if we were about 1 pc away from a supermassive black hole?" Surprisingly, we would not all die screaming immediately https://t.co/Lr70gzlXDs
whitequark: "What would happen if we were about 1 pc away from a supermassive black hole?" https://t.co/iZ7Ohm2Qq9
OSablin: "What would happen if we were about 1 pc away from a supermassive black hole?. (arXiv:1910.07760v1 [astro-ph.EP])" https://t.co/BowzuWDwdN
StarshipBuilder: What would happen if we were about 1 pc away from a supermassive black hole? https://t.co/1xHahwChwQ
RelativityPaper: What would happen if we were about 1 pc away from a supermassive black hole?. https://t.co/N1WnD7W10g
spacearcheology: RT @qraal: [1910.07760] What would happen if we were about 1 pc away from a supermassive black hole? https://t.co/ebUeFVC294
nyrath: RT @qraal: [1910.07760] What would happen if we were about 1 pc away from a supermassive black hole? https://t.co/ebUeFVC294
Laintal: RT @qraal: [1910.07760] What would happen if we were about 1 pc away from a supermassive black hole? https://t.co/ebUeFVC294
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 1
Total Words: 5462
Unique Words: 1609

2.041 Mikeys
#8. Universal Text Representation from BERT: An Empirical Study
Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
We present a systematic investigation of layer-wise BERT activations for general-purpose text representations to understand what linguistic information they capture and how transferable they are across different tasks. Sentence-level embeddings are evaluated against two state-of-the-art models on downstream and probing tasks from SentEval, while passage-level embeddings are evaluated on four question-answering (QA) datasets under a learning-to-rank problem setting. Embeddings from the pre-trained BERT model perform poorly in semantic similarity and sentence surface information probing tasks. Fine-tuning BERT on natural language inference data greatly improves the quality of the embeddings. Combining embeddings from different BERT layers can further boost performance. BERT embeddings outperform the BM25 baseline significantly on factoid QA datasets at the passage level, but fail to perform better than BM25 on non-factoid datasets. For all QA datasets, there is a gap between the embedding-based method and in-domain fine-tuned BERT (we...
more | pdf | html
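A hedged sketch (not the authors' evaluation code) of extracting layer-wise sentence embeddings from pre-trained BERT, the kind of representation the study probes: mean-pool the token vectors of chosen encoder layers and average them. It assumes the Hugging Face transformers package and the bert-base-uncased checkpoint; the layer choice and pooling scheme are illustrative, not the paper's setup.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

def layerwise_sentence_embeddings(sentences, layers=(-1, -2)):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**batch).hidden_states      # tuple: embedding layer + 12 encoder layers
    mask = batch["attention_mask"].unsqueeze(-1)          # ignore padding tokens when pooling
    pooled = []
    for layer in layers:
        h = hidden_states[layer] * mask
        pooled.append(h.sum(dim=1) / mask.sum(dim=1))     # masked mean over tokens
    return torch.stack(pooled).mean(dim=0)                # average the chosen layers

emb = layerwise_sentence_embeddings(["A question.", "A candidate passage."])
print(emb.shape)                                          # (2, 768)
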
Figures
None.
Tweets
BrundageBot: Universal Text Representation from BERT: An Empirical Study. Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang https://t.co/DGFAyDfe9g
arxiv_in_review: #acl2019nlp Universal Text Representation from BERT: An Empirical Study. (arXiv:1910.07973v1 [cs\.CL]) https://t.co/2otlXDd8gO
arxivml: "Universal Text Representation from BERT: An Empirical Study", Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallapati, … https://t.co/UzRAnaVNCB
arxiv_cs_LG: Universal Text Representation from BERT: An Empirical Study. Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang https://t.co/3hwEyAN25n
Memoirs: Universal Text Representation from BERT: An Empirical Study. https://t.co/eX7snyvAKg
arxiv_cscl: Universal Text Representation from BERT: An Empirical Study https://t.co/JftEOSX0Bg
HubBucket: RT @Memoirs: Universal Text Representation from BERT: An Empirical Study. https://t.co/eX7snyvAKg
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 5
Total Words: 0
Unique Words: 0

2.041 Mikeys
#9. The dark matter component of the Gaia radially anisotropic substructure
Nassim Bozorgnia, Azadeh Fattahi, Carlos S. Frenk, Andrew Cheek, David G. Cerdeno, Facundo A. Gómez, Robert J. J. Grand, Federico Marinacci
We study the properties of the dark matter component of the radially anisotropic stellar population recently identified in the Gaia data, using magneto-hydrodynamical simulations of Milky Way-like halos from the Auriga project. We identify 10 simulated galaxies that approximately match the rotation curve and stellar mass of the Milky Way. Four of these have an anisotropic stellar population reminiscent of the Gaia structure. We find an anti-correlation between the dark matter mass fraction of this population in the Solar neighbourhood and its orbital anisotropy. We estimate the local dark matter density and velocity distribution for halos with and without the anisotropic stellar population, and use them to simulate the signals expected in future xenon and germanium direct detection experiments. We find that a generalized Maxwellian distribution fits the dark matter halo integrals of the Milky Way-like halos containing the radially anisotropic stellar population. For dark matter particle masses below approximately 10 GeV, direct...
more | pdf | html
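A numerical sketch of the "generalized Maxwellian" mentioned in the abstract, using one common parametrization, f(v) proportional to v^2 exp[-(v^2 / (2 sigma^2))^alpha] with an escape-speed cutoff, and of the halo integral eta(v_min) that enters direct-detection rates. The functional form and parameter values are assumptions for illustration, not fits from the paper.

import numpy as np

V = np.linspace(0.0, 800.0, 8001)        # speed grid in km/s
DV = V[1] - V[0]

def speed_pdf(sigma=160.0, alpha=1.0, v_esc=550.0):
    """Generalized Maxwellian speed distribution on the grid, normalized to 1."""
    g = V**2 * np.exp(-((V**2) / (2.0 * sigma**2))**alpha)
    g[V > v_esc] = 0.0
    return g / (g.sum() * DV)

def halo_integral(v_min, g):
    """eta(v_min) = integral over v > v_min of g(v)/v, in (km/s)^-1."""
    mask = V >= max(v_min, DV)            # avoid dividing by v = 0
    return float(np.sum(g[mask] / V[mask]) * DV)

for alpha in (1.0, 1.5):                  # alpha = 1 recovers the standard Maxwellian
    g = speed_pdf(alpha=alpha)
    print(alpha, [round(halo_integral(vm, g), 5) for vm in (100.0, 300.0, 500.0)])
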
Figures
None.
Tweets
IPPP_Durham: New IPPP paper! "The dark matter component of the Gaia radially anisotropic substructure" https://t.co/Q4utzaW10G https://t.co/UUsx0CnHpL
Jos_de_Bruijne: "The dark matter component of the #GaiaMission radially anisotropic substructure" https://t.co/jVzIhhl00r "For DM particle masses < ~10 GeV, direct detection exclusion limits [...] show a mild shift towards smaller masses compared to the Standard Halo Model" #GaiaDR2 https://t.co/FTMETjLBIF
higgsinocat: The dark matter component of the Gaia radially anisotropic substructure. (arXiv:1910.07536v1 [https://t.co/qoXZjNJgOA]) relevance:100% https://t.co/UMhaBDr03V #darkmatter @ESAGaia @DGCerdeno https://t.co/LnIHF2uWHP
DGCerdeno: In our new article, https://t.co/6fhELSlYSV, we explore the @ESAGaia radially anisotropic substructure 🥒 using the Auriga simulations and derive implications for direct #darkmatter detection experiments such as #SuperCDMS and @lzdarkmatter @Xenon1T @IPPP_Durham @DarkerMatters https://t.co/dUhnp78aSe
scimichael: The dark matter component of the Gaia radially anisotropic substructure https://t.co/eOwjXCAS03
HEPPhenoPapers: The dark matter component of the Gaia radially anisotropic substructure. https://t.co/WToV4elpv8
GregorioBaquer5: RT @DGCerdeno: In our new article, https://t.co/6fhELSlYSV, we explore the @ESAGaia radially anisotropic substructure 🥒 using the Auriga si…
DarkerMatters: RT @DGCerdeno: In our new article, https://t.co/6fhELSlYSV, we explore the @ESAGaia radially anisotropic substructure 🥒 using the Auriga si…
StefanJordanARI: RT @Jos_de_Bruijne: "The dark matter component of the #GaiaMission radially anisotropic substructure" https://t.co/jVzIhhl00r "For DM parti…
C_Weniger: RT @DGCerdeno: In our new article, https://t.co/6fhELSlYSV, we explore the @ESAGaia radially anisotropic substructure 🥒 using the Auriga si…
GrumpyScientist: RT @DGCerdeno: In our new article, https://t.co/6fhELSlYSV, we explore the @ESAGaia radially anisotropic substructure 🥒 using the Auriga si…
AHEPGroup: RT @DGCerdeno: In our new article, https://t.co/6fhELSlYSV, we explore the @ESAGaia radially anisotropic substructure 🥒 using the Auriga si…
DurhamRdm: RT @DGCerdeno: In our new article, https://t.co/6fhELSlYSV, we explore the @ESAGaia radially anisotropic substructure 🥒 using the Auriga si…
baptisteravina: RT @DGCerdeno: In our new article, https://t.co/6fhELSlYSV, we explore the @ESAGaia radially anisotropic substructure 🥒 using the Auriga si…
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 8
Total Words: 13454
Unique Words: 3061

2.04 Mikeys
#10. Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics
Oier Mees, Maxim Tatarchenko, Thomas Brox, Wolfram Burgard
We present a convolutional neural network for joint 3D shape prediction and viewpoint estimation from a single input image. During training, our network gets its learning signal from a silhouette of an object in the input image - a form of self-supervision. It does not require ground-truth data for 3D shapes or viewpoints. Because it relies on such a weak form of supervision, our approach can easily be applied to real-world data. We demonstrate that our method produces reasonable qualitative and quantitative results on natural images for both shape estimation and viewpoint prediction. Unlike previous approaches, our method does not require multiple views of the same object instance in the dataset, which significantly expands its applicability in practical robotics scenarios. We showcase it by using the hallucinated shapes to improve performance on the task of grasping real-world objects, both in simulation and with a PR2 robot.
more | pdf | html
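A hedged sketch of the silhouette self-supervision described above: project a predicted voxel occupancy grid onto the image plane and compare it with the object's 2D mask, so no 3D ground truth is needed. The naive occupancy projection along the depth axis and all shapes below are illustrative assumptions; the actual method's renderer and network differ.

import torch
import torch.nn.functional as F

def silhouette_from_voxels(voxels):
    """voxels: (B, D, H, W) occupancy probabilities -> (B, H, W) soft silhouette."""
    # A pixel is covered if any voxel along the viewing (depth) axis is occupied.
    return 1.0 - torch.prod(1.0 - voxels, dim=1)

def silhouette_loss(pred_voxels, target_silhouette):
    return F.binary_cross_entropy(silhouette_from_voxels(pred_voxels), target_silhouette)

pred = torch.rand(2, 32, 32, 32, requires_grad=True)    # stand-in for a network's shape output
target = (torch.rand(2, 32, 32) > 0.7).float()          # stand-in for a mask taken from the image
loss = silhouette_loss(pred, target)
loss.backward()                                         # gradients flow back to the 3D prediction
print(loss.item())
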
Figures
None.
Tweets
BrundageBot: Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics. Oier Mees, Maxim Tatarchenko, Thomas Brox, and Wolfram Burgard https://t.co/wHxAloRsPk
arxivml: "Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics", Oier Mees, Maxim Tatarchenko, … https://t.co/9ZcuFCE3d9
arxiv_cs_LG: Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics. Oier Mees, Maxim Tatarchenko, Thomas Brox, and Wolfram Burgard https://t.co/NEYqqFcYGy
Memoirs: Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics. https://t.co/41O2YYFXoT
arxiv_cscv: Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics https://t.co/lTKUH6csId
arxiv_cs_cv_pr: Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics. Oier Mees, Maxim Tatarchenko, Thomas Brox, and Wolfram Burgard https://t.co/o1LkfwIUb4
databytz: RT @arxiv_cscv: Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics https://t.co/lTKUH6csId
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 0
Unique Words: 0

About

Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.

Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).

To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else).

To see beautiful figures extracted from papers, follow us on Instagram.

Tracking 208,410 papers.
