Top 10 Arxiv Papers Today


2.352 Mikeys
#1. Dark Matter Strikes Back at the Galactic Center
Rebecca K. Leane, Tracy R. Slatyer
Statistical evidence has previously suggested that the Galactic Center GeV Excess (GCE) originates largely from point sources, and not from annihilating dark matter. We examine the impact of unmodeled source populations on identifying the true origin of the GCE using non-Poissonian template fitting (NPTF) methods. In a proof-of-principle example with simulated data, we show that unmodeled sources in the Fermi Bubbles can lead to a dark matter signal being misattributed to point sources by the NPTF. We find striking behavior consistent with such a mismodeling effect in the real Fermi data: large artificial dark matter signals injected into the data are completely misattributed to point sources. Consequently, we conclude that dark matter may provide a dominant contribution to the GCE after all.
more | pdf | html
Figures
None.
Tweets
emulenews: Dark Matter Strikes Back at the Galactic Center. Unmodeled sources in the Fermi Bubbles can lead to a dark matter signal being misattributed to point sources. https://t.co/QGOEQj6vFF
threadreaderapp: @gomijacogeo Bonjour the unroll you asked for: Thread by @RKLeane: "New paper out today! Dark Matter Strikes Back at the Galactic Center https://t.co/XKQFaIViYb We show that dark matter a […]" https://t.co/HB4L3LhV9Z Share this if you think it's interesting. 🤖
RKLeane: New paper out today! Dark Matter Strikes Back at the Galactic Center https://t.co/6DvXFfZGGT We show that dark matter annihilation might explain the excess of gamma rays detected at the center of our galaxy, after all. Mega-thread explaining our results and backstory below!
BjoernPenning: Very interesting paper from @RKLeane and Tracy Slatyer 'Dark Matter Strikes Back at the Galactic Center' https://t.co/IvB0N8BmJd. I am sure @DanHooperAstro is excited. https://t.co/mWwNDI1dPq
Katelinsaurus: Ooh this is out! https://t.co/TxoLnrqLmz Congratulations @RKLeane!
scimichael: Dark Matter Strikes Back at the Galactic Center https://t.co/rGOJuAZyI5
HEPPhenoPapers: Dark Matter Strikes Back at the Galactic Center. https://t.co/5WTgXCusL0
Retweeted (RT @RKLeane: New paper out today! Dark Matter Strikes Back at the Galactic Center https://t.co/6DvXFfZGGT We show that dark matter annihi…) by: emulenews, seanmcarroll, ajlopez, mikraemer, nyrath, partialobs, teaddicted, UFOL3TA, ElizabethUgalde, prezcannady, rcalsaverini, duxguitar, ChrisDMarshall, pantulis, suchi_kulkarni, DCHooper91, VanUgalde, alexanderchopan, jesseXjesse, juandesant, marianojavierd1, JostMigenda, BjoernPenning, Katelinsaurus, AsteronX, Raptorel, astrocolombian, debasishborah, QuantumMessage, thequarksoup, steuard, BeyondNerva, TimonEmken, innesbigaran, Roberto34513391, documentavi, bordercore, handydufresne, cjphy, saniaheba, millanvf, L_J_Big, nawusijia
WHEbe60165: RT @qraal: [1904.08430] Dark Matter Strikes Back at the Galactic Center https://t.co/678jFsloqK
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 2
Total Words: 20186
Unique Words: 3293

2.071 Mikeys
#2. Tex2Shape: Detailed Full Human Body Geometry from a Single Image
Thiemo Alldieck, Gerard Pons-Moll, Christian Theobalt, Marcus Magnor
We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region obtained from off-the-shelf methods. From a partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely with synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
more | pdf | html
Figures
Tweets
roadrunning01: Tex2Shape: Detailed Full Human Body Geometry from a Single Image https://t.co/4dh7ZWoZTS https://t.co/YOkokdFqvn
arxivml: "Tex2Shape: Detailed Full Human Body Geometry from a Single Image", Thiemo Alldieck, Gerard Pons-Moll, Christian Th… https://t.co/eBmsvmmBGi
arxiv_cscv: Tex2Shape: Detailed Full Human Body Geometry from a Single Image https://t.co/85qVOywXq8
arxiv_cscv: Tex2Shape: Detailed Full Human Body Geometry from a Single Image https://t.co/85qVOyOyhG
Retweeted (RT @roadrunning01: Tex2Shape: Detailed Full Human Body Geometry from a Single Image https://t.co/4dh7ZWoZTS https://t.co/YOkokdFqvn) by: cunicode, AndresMGarza, KouroshMeshgi, briandixn
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 7176
Unique Words: 2152

2.07 Mikeys
#3. Effective Estimation of Deep Generative Language Models
Tom Pelsmaeker, Wilker Aziz
Advances in variational inference enable parameterisation of probabilistic models by deep neural networks. This combines the statistical transparency of the probabilistic modelling framework with the representational power of deep learning. Yet, it seems difficult to effectively estimate such models in the context of language modelling. Even models based on rather simple generative stories struggle to make use of additional structure due to a problem known as posterior collapse. We concentrate on one such model, namely, a variational auto-encoder, which we argue is an important building block in hierarchical probabilistic models of language. This paper contributes a sober view of the problem, a survey of techniques to address it, novel techniques, and extensions to the model. Our experiments on modelling written English text support a number of recommendations that should help researchers interested in this exciting field.
more | pdf | html
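One standard remedy for posterior collapse surveyed in this literature is "free bits", where each latent dimension is allowed a small KL budget before being penalized. A minimal numpy sketch of that objective (my own illustration under stated assumptions, not the paper's code; the function names and the 0.5-nat default are made up for this example):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Per-dimension KL( N(mu, sigma^2) || N(0, 1) ) for a diagonal Gaussian posterior.
    return 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def elbo_with_free_bits(rec_ll, mu, logvar, free_bits=0.5):
    # "Free bits": each latent dimension keeps at least `free_bits` nats of KL
    # un-penalized, so the optimizer gains nothing by collapsing the posterior
    # onto the prior (where the KL would otherwise go to zero).
    kl = np.maximum(gaussian_kl(mu, logvar), free_bits)
    return rec_ll - kl.sum()
```

With a fully collapsed posterior (mu = 0, logvar = 0) the clamp still charges `free_bits` per dimension, which is exactly what removes the incentive to collapse.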
Figures
None.
Tweets
BrundageBot: Effective Estimation of Deep Generative Language Models. Tom Pelsmaeker and Wilker Aziz https://t.co/URaLsNImrq
arxiv_in_review: #acl2019nlp Effective Estimation of Deep Generative Language Models. (arXiv:1904.08194v1 [cs\.CL]) https://t.co/7ikvdrRMI6
arxivml: "Effective Estimation of Deep Generative Language Models", Tom Pelsmaeker, Wilker Aziz https://t.co/jy2SBJq7gU
arxiv_cs_LG: Effective Estimation of Deep Generative Language Models. Tom Pelsmaeker and Wilker Aziz https://t.co/nvRPMez0pz
arxiv_cscl: Effective Estimation of Deep Generative Language Models https://t.co/mAWQiwb9Y8
TomPelsmaeker: Happy to announce our new work: Effective Estimation of Deep Generative Language Models (https://t.co/HtYmNKRndG), a thorough comparison of optimisation techniques for VAEs on a language modelling task. We show that VAEs with strong decoders are possible. With @wilkeraziz. https://t.co/AczhXQx6nB
ComputerPapers: Effective Estimation of Deep Generative Language Models. https://t.co/MoKh8j5u7D
Retweeted (RT @TomPelsmaeker: Happy to announce our new work: Effective Estimation of Deep Generative Language Models (https://t.co/HtYmNKRndG), a tho…) by: enqush, EdinburghNLP, kastnerkyle, iatitov, vnfrombucharest, jmtomczak, unsorsodicorda, letranger14, kadarakos, chirghosh, mriosb08, rajpratim, cbaziotis, elacic1, _vaskon_, yang_zonghan, FumingGuo
Github

Code accompanying the paper "Effective Estimation of Deep Generative Language Models".

Repository: deep-generative-lm
User: tom-pelsmaeker
Language: Python
Stargazers: 6
Subscribers: 2
Forks: 2
Open Issues: 0
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 2
Total Words: 12371
Unique Words: 3798

2.069 Mikeys
#4. No-Reference Quality Assessment of Contrast-Distorted Images using Contrast Enhancement
Jia Yan, Jie Li, Xin Fu
No-reference image quality assessment (NR-IQA) aims to measure image quality without a reference image. However, contrast distortion has been overlooked in current NR-IQA research. In this paper, we propose a very simple but effective metric for predicting the quality of contrast-altered images, based on the fact that a high-contrast image is often more similar to its contrast-enhanced version. Specifically, we first generate an enhanced image through histogram equalization. We then calculate the similarity of the original image and the enhanced one using the structural similarity index (SSIM) as the first feature. Further, we calculate the histogram-based entropies and the cross entropies of the original image and the enhanced one, yielding four more features. Finally, we learn a regression module that fuses the aforementioned five features to infer the quality score. Experiments on four publicly available databases validate the superiority and efficiency of the proposed technique.
more | pdf | html
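The recipe in this abstract is concrete enough to sketch end to end. Below is my reading of the five features (an illustration, not the authors' reference implementation; the single-window SSIM and 256-bin histograms are simplifying assumptions). A regressor would then be trained to map these features to a quality score:

```python
import numpy as np

def _probs(img):
    # Normalized 256-bin grayscale histogram.
    h, _ = np.histogram(img, bins=256, range=(0, 256))
    return h / h.sum()

def _entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def _cross_entropy(p, q):
    q = np.clip(q, 1e-12, 1.0)  # avoid log(0)
    return -(p * np.log2(q)).sum()

def _global_ssim(x, y):
    # Single-window SSIM over the whole image (a simplification of the
    # usual sliding-window SSIM).
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

def ceiq_features(img):
    # img: 2-D uint8 grayscale array.
    p = _probs(img)
    cdf = np.cumsum(p)
    enhanced = (cdf[img] * 255).astype(np.uint8)  # histogram equalization
    q = _probs(enhanced)
    # Feature 1: similarity to the contrast-enhanced version; a high-contrast
    # input barely changes under equalization, so its SSIM stays near 1.
    # Features 2-5: the two entropies and the two cross entropies.
    return [_global_ssim(img, enhanced), _entropy(p), _entropy(q),
            _cross_entropy(p, q), _cross_entropy(q, p)]
```

For an already well-spread image (e.g. a smooth gradient), equalization is nearly the identity and the SSIM feature sits near 1, which matches the abstract's motivating observation.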
Figures
Tweets
arxivml: "No-Reference Quality Assessment of Contrast-Distorted Images using Contrast Enhancement", Jia Yan, Jie Li, Xin Fu https://t.co/yma3E1Sa2G
arxiv_cscv: No-Reference Quality Assessment of Contrast-Distorted Images using Contrast Enhancement https://t.co/NkY9NDOigi
arxiv_cscv: No-Reference Quality Assessment of Contrast-Distorted Images using Contrast Enhancement https://t.co/NkY9NE5T7Q
Github

:art: Code for "No-Reference Quality Assessment of Contrast-Distorted Images using Contrast Enhancement" by J. Yan, J. Li, X. Fu

Repository: CEIQ
User: mtobeiyf
Language: C
Stargazers: 7
Subscribers: 1
Forks: 2
Open Issues: 0
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 3
Total Words: 5348
Unique Words: 1760

2.068 Mikeys
#5. Towards VQA Models that can Read
Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, Marcus Rohrbach
Studies have shown that a dominant class of questions asked by visually impaired users on images of their surroundings involves reading text in the image. But today's VQA models cannot read! Our paper takes a first step towards addressing this problem. First, we introduce a new "TextVQA" dataset to facilitate progress on this important problem. Existing datasets either have a small proportion of questions about text (e.g., the VQA dataset) or are too small (e.g., the VizWiz dataset). TextVQA contains 45,336 questions on 28,408 images that require reasoning about text to answer. Second, we introduce a novel model architecture that reads text in the image, reasons about it in the context of the image and the question, and predicts an answer which might be a deduction based on the text and the image or composed of the strings found in the image. Consequently, we call our approach Look, Read, Reason & Answer (LoRRA). We show that LoRRA outperforms existing state-of-the-art VQA models on our TextVQA dataset. We find that the gap...
more | pdf | html
Figures
Tweets
arxivml: "Towards VQA Models that can Read", Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra… https://t.co/7WaSUWYBGH
arxiv_cs_LG: Towards VQA Models that can Read. Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach https://t.co/EmhboLu54f
arxiv_cscv: Towards VQA Models that can Read https://t.co/0upkSi2jcK
arxiv_cscv: Towards VQA Models that can Read https://t.co/0upkShKHOa
arxiv_cscl: Towards VQA Models that can Read https://t.co/WRw4C8LzWb
arxiv_cscl: Towards VQA Models that can Read https://t.co/WRw4C93aNJ
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 8
Total Words: 9366
Unique Words: 2849

2.067 Mikeys
#6. Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks
Shawn Shan, Emily Willson, Bolun Wang, Bo Li, Haitao Zheng, Ben Y. Zhao
Deep neural networks are vulnerable to adversarial attacks. Numerous efforts have focused on defenses that either try to patch 'holes' in trained models or try to make it difficult or costly to compute adversarial examples exploiting these holes. In our work, we explore a counter-intuitive approach of constructing "adversarial trapdoors." Unlike prior works that try to patch or disguise vulnerable points in the manifold, we intentionally inject 'trapdoors': artificial weaknesses in the manifold that attract optimized perturbations into certain pre-embedded local optima. As a result, adversarial generation functions naturally gravitate towards our trapdoors, producing adversarial examples that the model owner can recognize through a known neuron activation signature. In this paper, we introduce trapdoors and describe an implementation of trapdoors using strategies similar to backdoor/Trojan attacks. We show that by proactively injecting trapdoors into the models (and extracting their neuron activation signature), we can detect...
more | pdf | html
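The detection step the abstract describes can be sketched in a few lines. This is my own illustration of the idea, not the paper's calibrated procedure: the cosine similarity measure, the 0.9 threshold, and the function names are assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def looks_adversarial(activations, trapdoor_signature, threshold=0.9):
    # An input whose perturbation was pulled into the trapdoor's local optimum
    # should produce internal activations that closely match the signature
    # recorded when the trapdoor was injected into the model.
    return cosine_similarity(activations, trapdoor_signature) >= threshold
```

In practice the signature would be an average over internal-layer activations of trapdoored inputs, and the threshold would be set from a held-out calibration set.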
Figures
Tweets
moyix: Cute idea – kind of like chaff bugs, but for adversarial inputs in a DNN! https://t.co/ipuwA3Fv3N
arxivml: "Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks", Shawn Shan, Emil… https://t.co/5ScfK9npar
arxiv_cs_LG: Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks. Shawn Shan, Emily Willson, Bolun Wang, Bo Li, Haitao Zheng, and Ben Y. Zhao https://t.co/dHXMBF5L98
Memoirs: Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks. https://t.co/tSP2yNvzWU
Retweeted (RT @moyix: Cute idea – kind of like chaff bugs, but for adversarial inputs in a DNN! https://t.co/ipuwA3Fv3N) by: polytomous, pwnslinger, ynadji
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 6
Total Words: 11418
Unique Words: 2754

2.067 Mikeys
#7. Knowledge-rich Image Gist Understanding Beyond Literal Meaning
Lydia Weiland, Ioana Hulpus, Simone Paolo Ponzetto, Wolfgang Effelsberg, Laura Dietz
We investigate the problem of understanding the message (gist) conveyed by images and their captions as found, for instance, on websites or news articles. To this end, we propose a methodology to capture the meaning of image-caption pairs on the basis of large amounts of machine-readable knowledge that has previously been shown to be highly effective for text understanding. Our method identifies the connotation of objects beyond their denotation: where most approaches to image understanding focus on the denotation of objects, i.e., their literal meaning, our work addresses the identification of connotations, i.e., iconic meanings of objects, to understand the message of images. We view image understanding as the task of representing an image-caption pair on the basis of a wide-coverage vocabulary of concepts such as the one provided by Wikipedia, and cast gist detection as a concept-ranking problem with image-caption pairs as queries. To enable a thorough investigation of the problem of gist understanding, we produce a gold...
more | pdf | html
Figures
None.
Tweets
arxivml: "Knowledge-rich Image Gist Understanding Beyond Literal Meaning", Lydia Weiland, Ioana Hulpus, Simone Paolo Ponzett… https://t.co/DHCftIcUF0
arxiv_cscv: Knowledge-rich Image Gist Understanding Beyond Literal Meaning https://t.co/26NinUQqdc
arxiv_cscv: Knowledge-rich Image Gist Understanding Beyond Literal Meaning https://t.co/26NinV814K
arxiv_cscl: Knowledge-rich Image Gist Understanding Beyond Literal Meaning https://t.co/p7nfGiR5Jf
arxiv_cscl: Knowledge-rich Image Gist Understanding Beyond Literal Meaning https://t.co/p7nfGj8GAN
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 5
Total Words: 0
Unique Words: 0

2.065 Mikeys
#8. Meta-learning Convolutional Neural Architectures for Multi-target Concrete Defect Classification with the COncrete DEfect BRidge IMage Dataset
Martin Mundt, Sagnik Majumder, Sreenivas Murali, Panagiotis Panetsos, Visvanathan Ramesh
Recognition of defects in concrete infrastructure, especially in bridges, is a costly and time-consuming, yet crucial, first step in the assessment of structural integrity. Large variation in the appearance of the concrete material, changing illumination and weather conditions, a variety of possible surface markings, as well as the possibility for different types of defects to overlap, make it a challenging real-world task. In this work we introduce the novel COncrete DEfect BRidge IMage dataset (CODEBRIM) for multi-target classification of five commonly appearing concrete defects. We investigate and compare two reinforcement-learning-based meta-learning approaches, MetaQNN and efficient neural architecture search, to find suitable convolutional neural network architectures for this challenging multi-class multi-target task. We show that the learned architectures have fewer overall parameters in addition to yielding better multi-target accuracy than popular neural architectures from the literature evaluated in the context of our...
more | pdf | html
Figures
Tweets
mundt_martin: I'm happy to share our #cvpr2019 #deeplearning paper and dataset: "Meta-learning Convolutional Neural Architectures for Multi-target Concrete Defect Classification with the COncrete DEfect BRidge IMage Dataset" paper: https://t.co/WERnm4Nepf dataset: https://t.co/M9EoPCU6FP https://t.co/nQFgTh0pc7
StatsPapers: Meta-learning Convolutional Neural Architectures for Multi-target Concrete Defect Classification with the COncrete DEfect BRidge IMage Dataset. https://t.co/zC6nHfg5dV
arxivml: "Meta-learning Convolutional Neural Architectures for Multi-target Concrete Defect Classification with the COncrete… https://t.co/DXxF5mog7Z
Github

Open-source code for our CVPR19 paper "Meta-learning Convolutional Neural Architectures for Multi-target Concrete Defect Classification with the COncrete DEfect BRidge IMage Dataset".

Repository: meta-learning-CODEBRIM
User: MrtnMndt
Language: None
Stargazers: 1
Subscribers: 1
Forks: 0
Open Issues: 0
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 5
Total Words: 11119
Unique Words: 2981

2.065 Mikeys
#9. SPONGE: A generalized eigenproblem for clustering signed networks
Mihai Cucuringu, Peter Davies, Aldo Glielmo, Hemant Tyagi
We introduce a principled and theoretically sound spectral method for $k$-way clustering in signed graphs, where the affinity measure between nodes takes either positive or negative values. Our approach is motivated by social balance theory, where the task of clustering aims to decompose the network into disjoint groups, such that individuals within the same group are connected by as many positive edges as possible, while individuals from different groups are connected by as many negative edges as possible. Our algorithm relies on a generalized eigenproblem formulation inspired by recent work on constrained clustering. We provide theoretical guarantees for our approach in the setting of a signed stochastic block model, by leveraging tools from matrix perturbation theory and random matrix theory. An extensive set of numerical experiments on both synthetic and real data shows that our approach compares favorably with state-of-the-art methods for signed clustering, especially for a large number of clusters and sparse measurement graphs.
more | pdf | html
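From the abstract (and the companion SigNet repository listed below), the core computation appears to be a single generalized eigenproblem on the Laplacians of the positive- and negative-edge subgraphs. A dense, numpy-only sketch under that reading follows; the regularizers `tau_p`/`tau_m` and the exact matrix pairing are my assumptions, not a verified reproduction of SPONGE:

```python
import numpy as np

def sponge_embedding(A, k, tau_p=1.0, tau_m=1.0):
    # A: symmetric signed adjacency matrix; returns a k-dimensional spectral
    # embedding whose rows would be fed into k-means.
    Ap = np.maximum(A, 0.0)          # positive edges
    Am = np.maximum(-A, 0.0)         # negative edges
    Dp = np.diag(Ap.sum(axis=1))
    Dm = np.diag(Am.sum(axis=1))
    Lp = Dp - Ap                     # Laplacian of the positive subgraph
    Lm = Dm - Am                     # Laplacian of the negative subgraph
    # Generalized eigenproblem (Lp + tau_m*Dm) v = w (Lm + tau_p*Dp) v,
    # reduced to a standard symmetric eigenproblem via Cholesky.
    M1 = Lp + tau_m * Dm
    M2 = Lm + tau_p * Dp
    C = np.linalg.cholesky(M2)       # requires M2 positive definite
    Cinv = np.linalg.inv(C)
    w, u = np.linalg.eigh(Cinv @ M1 @ Cinv.T)
    v = Cinv.T @ u                   # map back to the original coordinates
    return v[:, :k]                  # eigenvectors of the k smallest eigenvalues
```

On a toy signed graph with two groups (positive edges within, negative edges across), the smallest generalized eigenvector recovers the group split, which is the social-balance intuition from the abstract.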
Figures
Tweets
arxivml: "SPONGE: A generalized eigenproblem for clustering signed networks", Mihai Cucuringu, Peter Davies, Aldo Glielmo, H… https://t.co/aqg4IcpCrA
arxiv_cs_LG: SPONGE: A generalized eigenproblem for clustering signed networks. Mihai Cucuringu, Peter Davies, Aldo Glielmo, and Hemant Tyagi https://t.co/74onrcys8z
MathPaper: SPONGE: A generalized eigenproblem for clustering signed networks. https://t.co/a7O8m4z0yG
Github

A package for clustering of Signed Networks

Repository: SigNet
User: alan-turing-institute
Language: Python
Stargazers: 3
Subscribers: 5
Forks: 1
Open Issues: 0
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 16941
Unique Words: 3294

2.064 Mikeys
#10. DDLSTM: Dual-Domain LSTM for Cross-Dataset Action Recognition
Toby Perrett, Dima Damen
Domain alignment in convolutional networks aims to learn the degree of layer-specific feature alignment beneficial to the joint learning of source and target datasets. While increasingly popular in convolutional networks, there have been no previous attempts to achieve domain alignment in recurrent networks. Similar to spatial features, both source and target domains are likely to exhibit temporal dependencies that can be jointly learnt and aligned. In this paper we introduce Dual-Domain LSTM (DDLSTM), an architecture that is able to learn temporal dependencies from two domains concurrently. It performs cross-contaminated batch normalisation on both input-to-hidden and hidden-to-hidden weights, and learns the parameters for cross-contamination, for both single-layer and multi-layer LSTM architectures. We evaluate DDLSTM on frame-level action recognition using three datasets, taking a pair at a time, and report an average increase in accuracy of 3.5%. The proposed DDLSTM architecture outperforms standard, fine-tuned, and...
more | pdf | html
Figures
Tweets
dimadamen: Dual-Domain LSTM - our @CVPR2019 paper now on Arxiv https://t.co/0CnH3zAujd Project: https://t.co/sQJOFIYATO Work with Toby Perrett @VILaboratory @UoB_Engineering offers first attempt to incorporate a differentiable dual domain (multi-dataset training) component within an RNN. https://t.co/PH9yHn7ZO4
arxivml: "DDLSTM: Dual-Domain LSTM for Cross-Dataset Action Recognition", Toby Perrett, Dima Damen https://t.co/EiLy0Lh2n2
arxiv_cscv: DDLSTM: Dual-Domain LSTM for Cross-Dataset Action Recognition https://t.co/QZcmeO93CX
arxiv_cscv: DDLSTM: Dual-Domain LSTM for Cross-Dataset Action Recognition https://t.co/QZcmeNRsen
udmrzn: RT @arxiv_cscv: DDLSTM: Dual-Domain LSTM for Cross-Dataset Action Recognition https://t.co/QZcmeO93CX
dimadamen: RT @arxiv_cscv: DDLSTM: Dual-Domain LSTM for Cross-Dataset Action Recognition https://t.co/QZcmeNRsen
chirghosh: RT @dimadamen: Dual-Domain LSTM - our @CVPR2019 paper now on Arxiv https://t.co/0CnH3zAujd Project: https://t.co/sQJOFIYATO Work with Toby…
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 2
Total Words: 7063
Unique Words: 2092

About

Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.

Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).

To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else).

To see beautiful figures extracted from papers, follow us on Instagram.

Tracking 113,782 papers.
