Top 10 Arxiv Papers Today


2.099 Mikeys
#1. The compositionality of neural networks: integrating symbolism and connectionism
Dieuwke Hupkes, Verna Dankers, Mathijs Mul, Elia Bruni
Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional. As a response to this controversy, we present a set of tests that provide a bridge between, on the one hand, the vast amount of linguistic and philosophical theory about compositionality and, on the other, the successful neural models of language. We collect different interpretations of compositionality and translate them into five theoretically grounded tests that are formulated on a task-independent level. In particular, we provide tests to investigate (i) whether models systematically recombine known parts and rules; (ii) whether models can extend their predictions beyond the lengths seen in the training data; (iii) whether models' composition operations are local or global; (iv) whether models' predictions are robust to synonym substitutions; and (v) whether models favour rules or exceptions during...
more | pdf | html
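To make test (i) concrete, below is a minimal sketch of what a systematicity split can look like on a toy task; the vocabulary, the rewrite rule, and the held-out pairs are illustrative assumptions, not the paper's actual benchmark.

```python
import itertools
import random

# Toy compositional task: inputs are (modifier, noun) pairs and the
# ground truth applies each word's rewrite rule independently, so a
# model that has learned the parts should handle unseen combinations.
modifiers = ["red", "big", "old"]
nouns = ["cat", "dog", "car"]

def target(modifier, noun):
    # Purely compositional ground truth: concatenate per-word codes.
    return modifier.upper() + "_" + noun.upper()

pairs = list(itertools.product(modifiers, nouns))
random.seed(0)
random.shuffle(pairs)

# Systematicity split: hold out specific combinations while every
# individual word still appears somewhere in the training set.
held_out = {("red", "dog"), ("big", "car")}
train = [(m, n) for (m, n) in pairs if (m, n) not in held_out]
test = list(held_out)

assert all(any(m == tm for tm, _ in train) for m, _ in test)  # parts seen
assert all(any(n == tn for _, tn in train) for _, n in test)
print("train:", [(p, target(*p)) for p in train[:3]])
print("test :", [(p, target(*p)) for p in test])
```

A model trained on this split that fails on the held-out pairs has memorised pairings rather than learned the parts, which is what the paper's recombination test probes.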
Figures
None.
Tweets
BrundageBot: The compositionality of neural networks: integrating symbolism and connectionism. Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni https://t.co/J7B77ypydw
IntuitMachine: A proposed test to see if connectionist architectures (i.e. deep learning) are capable of the composition found in symbolic systems. https://t.co/N947NoU0VP #ai
omarsar0: This study uses several methods to test the compositionality of neural networks in particular as it relates to language modeling. Models tested: recurrent, convolution, and the recently popular transformer. The notes on linguistic theory are nice! https://t.co/gmySHE8Yyc https://t.co/YGkwpU5VTS
arxivml: "The compositionality of neural networks: integrating symbolism and connectionism", Dieuwke Hupkes, Verna Dankers, … https://t.co/It3Q62RS4t
_dieuwke_: Why? What else? And what does this have to do with compositionality in natural language? For more results, motivation and elaborate discussion, have a look: https://t.co/AqyUZUGHNg!
_dieuwke_: Curious what people may mean when they say a neural network is (not) compositional? And how that relates to linguistics and philosophy literature on compositionality? Check our new paper on compositionality in neural networks: https://t.co/AqyUZUGHNg! https://t.co/Z9bj1FYKiS
SciFi: The compositionality of neural networks: integrating symbolism and connectionism. https://t.co/DlFVf4cGMz
arxiv_cscl: The compositionality of neural networks: integrating symbolism and connectionism https://t.co/K0vPlCD4y2
arxiv_cscl: The compositionality of neural networks: integrating symbolism and connectionism https://t.co/K0vPlCUFpA
stjaco: The compositionality of neural networks: integrating symbolism and connectionism https://t.co/IPR96bXeyc
KnXChg: RT @arxiv_cscl: The compositionality of neural networks: integrating symbolism and connectionism https://t.co/K0vPlCD4y2
Github
Repository: am-i-compositional
User: i-machine-think
Language: Python
Stargazers: 0
Subscribers: 3
Forks: 0
Open Issues: 0
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 21534
Unique Words: 4245

2.078 Mikeys
#2. Automated quantum programming via reinforcement learning for combinatorial optimization
Keri A. McKiernan, Erik Davis, M. Sohaib Alam, Chad Rigetti
We develop a general method for incentive-based programming of hybrid quantum-classical computing systems using reinforcement learning, and apply this to solve combinatorial optimization problems on both simulated and real gate-based quantum computers. Relative to a set of randomly generated problem instances, agents trained through reinforcement learning techniques are capable of producing short quantum programs which generate high quality solutions on both types of quantum resources. We observe generalization to problems outside of the training set, as well as generalization from the simulated quantum resource to the physical quantum resource.
more | pdf | html
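As a rough illustration of the incentive-based setup, the sketch below casts program synthesis as an episodic RL problem: each action appends a gate to the program, and the reward is the quality of the sampled solution. The toy MaxCut instance, the fake run_program backend, and the random policy are all stand-ins; the paper trains PPO agents against simulated and physical gate-based quantum computers.

```python
import random

# Toy MaxCut instance: the four edges of a square graph.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]

def cut_value(bits):
    """Number of edges crossing the partition encoded by `bits`."""
    return sum(bits[i] != bits[j] for i, j in EDGES)

def run_program(program, n_qubits=4):
    # Stand-in for executing a gate sequence on a quantum backend and
    # sampling a bitstring; here each (qubit, angle) "gate" just biases
    # that qubit's flip probability. Purely illustrative, not real QC.
    probs = [0.5] * n_qubits
    for qubit, angle in program:
        probs[qubit] = min(1.0, max(0.0, probs[qubit] + angle))
    return [int(random.random() < p) for p in probs]

def episode(policy, max_gates=6):
    program = [policy() for _ in range(max_gates)]  # actions: append gates
    return cut_value(run_program(program))          # reward: solution quality

def random_policy():
    return (random.randrange(4), random.uniform(-0.5, 0.5))

rewards = [episode(random_policy) for _ in range(1000)]
print("mean reward of a random agent:", sum(rewards) / len(rewards))
```

A trained agent would replace random_policy with a learned, state-conditioned one; the paper's reported result is that such agents find short programs whose solution quality transfers from the simulated to the physical quantum resource.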
Figures
None.
Tweets
arxiv_org: Automated quantum programming via reinforcement learning for combinatorial optimization. https://t.co/oRFlu6QJJw https://t.co/SwmtWqNqtq
kenjikun__: https://t.co/3CTGZMeGcm Proposes a method that uses reinforcement learning to generate shallow quantum circuits that yield better solutions to combinatorial optimization problems. Both a simulator and real hardware were used as environments for running the quantum circuits. PPO (Proximal Policy Optimization) was used as the agent, and the resulting circuits were shorter than QAOA's.
arxivml: "Automated quantum programming via reinforcement learning for combinatorial optimization", Keri A. McKiernan, Erik … https://t.co/3jcEn5sMwj
k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization with @braised_babbage, @sohaib_alam, @ChadRigetti code, datasets, models: https://t.co/I73YtZIs7F #QuantumComputing #ReinforcementLearning #optimization
arxiv_cs_LG: Automated quantum programming via reinforcement learning for combinatorial optimization. Keri A. McKiernan, Erik Davis, M. Sohaib Alam, and Chad Rigetti https://t.co/SECujs1PrZ
Memoirs: Automated quantum programming via reinforcement learning for combinatorial optimization. https://t.co/4sov0F4REi
trisetyarso: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
HubBucket: RT @arxiv_org: Automated quantum programming via reinforcement learning for combinatorial optimization. https://t.co/oRFlu6QJJw https://t.c…
matt_reagor: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
stuart_hadfield: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
ChadRigetti: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
snuffkin: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
nalidoust: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
blake_johnson: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
nicolasochem: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
notmgsk: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
fifcsml: RT @arxiv_org: Automated quantum programming via reinforcement learning for combinatorial optimization. https://t.co/oRFlu6QJJw https://t.c…
sohaib_alam: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
braised_babbage: RT @k_mckiern: New paper! https://t.co/p4QZMpHi3p Automated quantum programming via reinforcement learning for combinatorial optimization…
JawaeChan: RT @arxiv_org: Automated quantum programming via reinforcement learning for combinatorial optimization. https://t.co/oRFlu6QJJw https://t.c…
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 0
Unique Words: 0

2.068 Mikeys
#3. VL-BERT: Pre-training of Generic Visual-Linguistic Representations
Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, Jifeng Dai
We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input. Each element of the input is either a word from the input sentence or a region-of-interest (RoI) from the input image, and the model is designed to fit most vision-and-language downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset with three tasks: masked language modeling with visual clues, masked RoI classification with linguistic clues, and sentence-image relationship prediction. Extensive empirical analysis demonstrates that the pre-training procedure better aligns the visual-linguistic clues and benefits downstream tasks such as visual question answering, visual commonsense reasoning and referring expression comprehension. It is worth noting...
more | pdf | html
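A minimal sketch of the input construction the abstract describes, in PyTorch: word tokens and image RoIs are embedded into a single sequence and processed by one shared Transformer. All dimensions, the linear RoI projection, and the two-layer encoder are illustrative assumptions; the actual model uses a BERT backbone with detector-derived region features.

```python
import torch
import torch.nn as nn

D, VOCAB, N_WORDS, N_ROIS = 256, 1000, 8, 4

word_emb = nn.Embedding(VOCAB, D)
roi_proj = nn.Linear(2048, D)      # project RoI features to model width
seg_emb = nn.Embedding(2, D)       # segment 0 = text, 1 = image region
pos_emb = nn.Embedding(N_WORDS + N_ROIS, D)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True),
    num_layers=2,
)

tokens = torch.randint(0, VOCAB, (1, N_WORDS))   # dummy sentence
rois = torch.randn(1, N_ROIS, 2048)              # dummy region features

# One joint sequence: every element is either a word or an RoI.
x = torch.cat([word_emb(tokens), roi_proj(rois)], dim=1)
seg = torch.tensor([[0] * N_WORDS + [1] * N_ROIS])
pos = torch.arange(N_WORDS + N_ROIS).unsqueeze(0)
h = encoder(x + seg_emb(seg) + pos_emb(pos))
print(h.shape)  # (1, N_WORDS + N_ROIS, D): contextualised joint features
```

The three pre-training tasks then attach heads to these joint features, e.g. predicting a masked word while the RoIs remain visible.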
Figures
None.
Tweets
BrundageBot: VL-BERT: Pre-training of Generic Visual-Linguistic Representations. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai https://t.co/eeqrCgbZze
arxiv_in_review: #ICLR2020 VL-BERT: Pre-training of Generic Visual-Linguistic Representations. (arXiv:1908.08530v1 [cs\.CV]) https://t.co/lfDcjcuN0B
arxivml: "VL-BERT: Pre-training of Generic Visual-Linguistic Representations", Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei… https://t.co/2PSRBL6rsn
Memoirs: VL-BERT: Pre-training of Generic Visual-Linguistic Representations. https://t.co/WDh5Mgf5Jx
arxiv_cscv: VL-BERT: Pre-training of Generic Visual-Linguistic Representations https://t.co/AKQoR6pinr
arxiv_cscv: VL-BERT: Pre-training of Generic Visual-Linguistic Representations https://t.co/AKQoR67GYR
arxiv_cscl: VL-BERT: Pre-training of Generic Visual-Linguistic Representations https://t.co/YjcG15hrIq
arxiv_cscl: VL-BERT: Pre-training of Generic Visual-Linguistic Representations https://t.co/YjcG15z2zY
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 7
Total Words: 0
Unique Words: 0

2.061 Mikeys
#4. More unlabelled data or label more data? A study on semi-supervised laparoscopic image segmentation
Yunguan Fu, Maria R. Robu, Bongjin Koo, Crispin Schneider, Stijn van Laarhoven, Danail Stoyanov, Brian Davidson, Matthew J. Clarkson, Yipeng Hu
A semi-supervised image segmentation task can be improved by adding more unlabelled images, by labelling some of the unlabelled images, or by combining both, since neither image acquisition nor expert labelling is trivial in most clinical applications. With a laparoscopic liver image segmentation application, we investigate the performance impact of altering the quantities of labelled and unlabelled training data, using a semi-supervised segmentation algorithm based on the mean teacher learning paradigm. We first report a significantly higher segmentation accuracy compared with supervised learning. Interestingly, this comparison reveals that the training strategy adopted in the semi-supervised algorithm is also responsible for the observed improvement, in addition to the added unlabelled data. We then compare different combinations of labelled and unlabelled data set sizes for training semi-supervised segmentation networks, to provide a quantitative example of the practically useful trade-off between the two data planning...
more | pdf | html
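The mean teacher paradigm the abstract builds on is compact enough to sketch: the teacher is an exponential moving average (EMA) of the student, and unlabelled images contribute only through a consistency loss between the two models' predictions. The one-layer stand-in network, random tensors, decay value, and unweighted loss sum below are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Conv2d(1, 2, 3, padding=1)   # stand-in segmentation net
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)               # the teacher is never backpropped

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
alpha = 0.99                              # EMA decay (illustrative value)

labelled_x = torch.randn(4, 1, 32, 32)
labelled_y = torch.randint(0, 2, (4, 32, 32))
unlabelled_x = torch.randn(8, 1, 32, 32)  # no masks needed for these

opt.zero_grad()
sup = F.cross_entropy(student(labelled_x), labelled_y)
with torch.no_grad():
    teacher_pred = teacher(unlabelled_x).softmax(dim=1)
cons = F.mse_loss(student(unlabelled_x).softmax(dim=1), teacher_pred)
(sup + cons).backward()
opt.step()

# EMA update: the teacher tracks a smoothed copy of the student.
with torch.no_grad():
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(alpha).add_(sp, alpha=1 - alpha)
```

The paper's notable finding is that part of the gain persists even without extra unlabelled data, i.e. this consistency-based training strategy itself contributes to the improvement.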
Figures
None.
Tweets
arxiv_org: More unlabelled data or label more data? A study on semi-supervised laparoscopic image se... https://t.co/92rdqAHP2W https://t.co/mVvz8lX6zm
arxivml: "More unlabelled data or label more data? A study on semi-supervised laparoscopic image segmentation", Yunguan Fu, … https://t.co/KaiiV79I1g
mathpluscode: Proud of my latest research on laparoscopic image segmentation. We showed that independent of an increase in unlabelled data, the semi-supervised training strategy improves the segmentation accuracy. Accepted for the @miccai2019 MIL3ID workshop, https://t.co/zoCvsJIqTt. https://t.co/VPGHnvHR4E
arxiv_cs_LG: More unlabelled data or label more data? A study on semi-supervised laparoscopic image segmentation. Yunguan Fu, Maria R. Robu, Bongjin Koo, Crispin Schneider, Stijn van Laarhoven, Danail Stoyanov, Brian Davidson, Matthew J. Clarkson, and Yipeng Hu https://t.co/YIfBNx9nB2
StatsPapers: More unlabelled data or label more data? A study on semi-supervised laparoscopic image segmentation. https://t.co/ItH0NVRNI0
arxiv_cscv: More unlabelled data or label more data? A study on semi-supervised laparoscopic image segmentation https://t.co/kdbuVlX7qX
Rosenchild: RT @arxiv_org: More unlabelled data or label more data? A study on semi-supervised laparoscopic image se... https://t.co/92rdqAHP2W https:/…
HubBucket: RT @arxiv_org: More unlabelled data or label more data? A study on semi-supervised laparoscopic image se... https://t.co/92rdqAHP2W https:/…
jaialkdanel: RT @arxiv_org: More unlabelled data or label more data? A study on semi-supervised laparoscopic image se... https://t.co/92rdqAHP2W https:/…
udmrzn: RT @arxiv_org: More unlabelled data or label more data? A study on semi-supervised laparoscopic image se... https://t.co/92rdqAHP2W https:/…
JawaeChan: RT @arxiv_org: More unlabelled data or label more data? A study on semi-supervised laparoscopic image se... https://t.co/92rdqAHP2W https:/…
junsukchoe: RT @arxiv_org: More unlabelled data or label more data? A study on semi-supervised laparoscopic image se... https://t.co/92rdqAHP2W https:/…
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 9
Total Words: 0
Unique Words: 0

2.06 Mikeys
#5. Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning
Jyoti Aneja, Harsh Agrawal, Dhruv Batra, Alexander Schwing
Diverse and accurate vision+language modeling is an important goal to retain creative freedom and maintain user engagement. However, adequately capturing the intricacies of diversity in language models is challenging. Recent works commonly resort to latent variable models augmented with more or less supervision from object detectors or part-of-speech tags. Common to all those methods is the fact that the latent variable either only initializes the sentence generation process or is identical across the steps of generation; neither offers fine-grained control. To address this concern, we propose Seq-CVAE, which learns a latent space for every word position. We encourage this temporal latent space to capture the 'intention' of how to complete the sentence by mimicking a representation that summarizes the future. We illustrate the efficacy of the proposed approach for anticipating sentence continuations on the challenging MSCOCO dataset, significantly improving diversity metrics compared to baselines while performing on par...
more | pdf | html
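To unpack 'a latent space for every word position': at each step t the decoder samples its own z_t (here from a state-conditioned prior via the reparameterisation trick) before emitting the next word, rather than reusing one sentence-level latent. The sizes, the LSTM cell, and greedy decoding below are illustrative stand-ins for the paper's Seq-CVAE; a trained model would also have an inference network and per-step KL terms.

```python
import torch
import torch.nn as nn

D, Z, VOCAB, T = 128, 16, 1000, 5

prior = nn.Linear(D, 2 * Z)          # per-step prior over z_t given state
decoder = nn.LSTMCell(Z + D, D)
readout = nn.Linear(D, VOCAB)
word_emb = nn.Embedding(VOCAB, D)

h, c = torch.zeros(1, D), torch.zeros(1, D)
word = torch.zeros(1, dtype=torch.long)      # <bos> token id
for t in range(T):
    mu, logvar = prior(h).chunk(2, dim=-1)
    z_t = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparam. sample
    h, c = decoder(torch.cat([z_t, word_emb(word)], dim=-1), (h, c))
    word = readout(h).argmax(dim=-1)         # greedy next-word choice
    print(t, word.item())
```

Resampling the z_t sequence yields a different caption each run, which is where the diversity gains come from.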
Figures
None.
Tweets
BrundageBot: Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning. Jyoti Aneja, Harsh Agrawal, Dhruv Batra, and Alexander Schwing https://t.co/OiveCQyKXK
arxivml: "Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning", Jyoti Aneja, Harsh Agrawal, … https://t.co/2y1MO0bGqf
StatsPapers: Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning. https://t.co/POo18z8uVP
arxiv_cscv: Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning https://t.co/aIbW4Syhw5
arxiv_cscv: Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning https://t.co/aIbW4SPSnD
arxiv_cscl: Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning https://t.co/qtC2wMy9aU
arxiv_cscl: Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning https://t.co/qtC2wMPK2s
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 0
Unique Words: 0

2.06 Mikeys
#6. ViCo: Word Embeddings from Visual Co-occurrences
Tanmay Gupta, Alexander Schwing, Derek Hoiem
We propose to learn word embeddings from visual co-occurrences. Two words co-occur visually if both words apply to the same image or image region. Specifically, we extract four types of visual co-occurrences between object and attribute words from large-scale, textually-annotated visual databases like VisualGenome and ImageNet. We then train a multi-task log-bilinear model that compactly encodes word "meanings" represented by each co-occurrence type into a single visual word-vector. Through unsupervised clustering, supervised partitioning, and a zero-shot-like generalization analysis we show that our word embeddings complement text-only embeddings like GloVe by better representing similarities and differences between visual concepts that are difficult to obtain from text corpora alone. We further evaluate our embeddings on five downstream applications, four of which are vision-language tasks. Augmenting GloVe with our embeddings yields gains on all tasks. We also find that random embeddings perform comparably to learned embeddings...
more | pdf | html
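A sketch of the log-bilinear fit the abstract mentions, in the GloVe family: learn vectors and biases so that w_i·w̃_j + b_i + b̃_j ≈ log X_ij for a co-occurrence count matrix X. The tiny random count matrix, single co-occurrence type, and plain SGD are illustrative assumptions; ViCo trains a multi-task model over four co-occurrence types extracted from VisualGenome and ImageNet.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(1, 50, size=(6, 6)).astype(float)  # toy co-occurrence counts
V, D, lr = X.shape[0], 8, 0.02

W = rng.normal(0, 0.1, (V, D))     # word vectors
Wc = rng.normal(0, 0.1, (V, D))    # context word vectors
b, bc = np.zeros(V), np.zeros(V)   # biases

for step in range(2000):
    for i in range(V):
        for j in range(V):
            err = W[i] @ Wc[j] + b[i] + bc[j] - np.log(X[i, j])
            gi, gj = err * Wc[j], err * W[i]   # gradients of squared error
            W[i] -= lr * gi
            Wc[j] -= lr * gj
            b[i] -= lr * err
            bc[j] -= lr * err

print("final squared error:",
      sum((W[i] @ Wc[j] + b[i] + bc[j] - np.log(X[i, j])) ** 2
          for i in range(V) for j in range(V)))
```

The paper's multi-task variant shares a single word vector across the four co-occurrence types, which is what compresses the different 'meanings' into one visual word-vector.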
Figures
None.
Tweets
BrundageBot: ViCo: Word Embeddings from Visual Co-occurrences. Tanmay Gupta, Alexander Schwing, and Derek Hoiem https://t.co/RFVPCPQQEy
arxivml: "ViCo: Word Embeddings from Visual Co-occurrences", Tanmay Gupta, Alexander Schwing, Derek Hoiem https://t.co/ZTor7QoRhE
tanmay2099: Need a break from BERTmania? Checkout ViCo -- multi-sense word embeddings from visual (as opposed to textual) co-occurrences. Work done in collaboration with @HoiemDerek and @alexschwing at @IllinoisCS. To be presented at ICCV 2019! https://t.co/ieT0BHTXbk
arxiv_cscv: ViCo: Word Embeddings from Visual Co-occurrences https://t.co/iY04LVqvJx
arxiv_cscv: ViCo: Word Embeddings from Visual Co-occurrences https://t.co/iY04LVI6B5
arxiv_cscl: ViCo: Word Embeddings from Visual Co-occurrences https://t.co/4h4xR1CRTn
arxiv_cscl: ViCo: Word Embeddings from Visual Co-occurrences https://t.co/4h4xR1UsKV
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 3
Total Words: 0
Unique Words: 0

2.057 Mikeys
#7. Deep Green Function Convolution for Improving Saliency in Convolutional Neural Networks
Dominique Beaini, Sofiane Achiche, Alexandre Duperré, Maxime Raison
Current saliency methods require learning large-scale regional features using small convolutional kernels, which is not possible with a simple feed-forward network. Some methods solve this problem by segmenting the image into superpixels, while others downscale the image through the network and rescale it back to its original size. The objective of this paper is to show that saliency convolutional neural networks (CNNs) can be improved by using a Green's function convolution (GFC) to extrapolate edge features into salient regions. The GFC acts as a gradient integrator, producing saliency features from thin edge-like features directly inside the CNN. Hence, we propose the gradient integration and sum (GIS) layer, which combines the edge features with the saliency features. Using the HED and DSS architectures, we demonstrate that adding a GIS layer near the network's output reduces the sensitivity to parameter initialization and to overfitting, thus improving the repeatability of training. By adding a GIS layer...
more | pdf | html
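The key idea, as the abstract presents it, is that convolving with a Green's function inverts a differential operator, so thin edge responses can be integrated into filled-in salient regions. Below is a minimal NumPy sketch that solves a Poisson problem in the Fourier domain, where the Green's-function convolution becomes a pointwise division; treating the edge map directly as the source term, and the toy input, are simplifying assumptions on my part, while the paper's GIS layer embeds the idea differentiably inside a CNN.

```python
import numpy as np

def green_integrate(edges):
    """Solve lap(s) = edges spectrally: s_hat = edges_hat / lap_hat."""
    h, w = edges.shape
    src = np.fft.fft2(edges)
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    denom = -(2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    denom[0, 0] = 1.0            # avoid divide-by-zero at the DC term
    s = src / denom
    s[0, 0] = 0.0                # pick the zero-mean solution
    return np.real(np.fft.ifft2(s))

edges = np.zeros((64, 64))
edges[16:48, 16] = 1.0           # a thin positive edge response...
edges[16:48, 47] = -1.0          # ...and its opposite-signed partner
saliency = green_integrate(edges)
print(saliency.shape, float(saliency.min()), float(saliency.max()))
```

The output forms a smooth ramp across the band between the two edges, illustrating how a gradient integrator turns edge-like features into region-like saliency features.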
Figures
Tweets
BrundageBot: Deep Green Function Convolution for Improving Saliency in Convolutional Neural Networks. Dominique Beaini, Sofiane Achiche, Alexandre Duperré, and Maxime Raison https://t.co/IctwizHzKt
arxivml: "Deep Green Function Convolution for Improving Saliency in Convolutional Neural Networks", Dominique Beaini, Sofian… https://t.co/wymUZvQlxO
Memoirs: Deep Green Function Convolution for Improving Saliency in Convolutional Neural Networks. https://t.co/hUyiPOdMGz
arxiv_cscv: Deep Green Function Convolution for Improving Saliency in Convolutional Neural Networks https://t.co/RPwL6XUqzN
arxiv_cscv: Deep Green Function Convolution for Improving Saliency in Convolutional Neural Networks https://t.co/RPwL6XCPbd
dirackuma: @momiji_fullmoon https://t.co/wh54b4raGw Garbled text…
disigandalf: RT @arxiv_cscv: Deep Green Function Convolution for Improving Saliency in Convolutional Neural Networks https://t.co/RPwL6XUqzN
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 7846
Unique Words: 2066

2.057 Mikeys
#8. Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and Validation
Alfio Gliozzo, Michael R. Glass, Sarthak Dash, Mustafa Canim
In this paper, we propose a fully automated system for extending knowledge graphs using external information from web-scale corpora. The system leverages a deep-learning-based technology for relation extraction that can be trained by a distantly supervised approach. In addition, it uses a deep learning approach for knowledge base completion, utilizing the global structure information of the induced KG to further refine the confidence of newly discovered relations. The system requires no adaptation effort for new languages and domains, as it uses no hand-labeled data, NLP analytics, or inference rules. Our experiments, performed on a popular academic benchmark, demonstrate that the system boosts the performance of relation extraction by a wide margin, reporting error reductions of 50% and relative improvements of up to 100%. A web-scale experiment conducted to extend DBPedia with knowledge from Common Crawl also shows that our system is not only scalable but...
more | pdf | html
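For readers unfamiliar with the distantly supervised training step: any sentence that mentions both entities of a known KG triple is taken as a (noisy) positive example of that triple's relation, so no hand-labelled data is needed. The toy KG, corpus, and naive string matching below are purely illustrative.

```python
# Known triples (the existing knowledge graph).
kg = {("Alan_Turing", "born_in", "London"),
      ("Ada_Lovelace", "born_in", "London")}

# Unlabelled web-scale text (here, three toy sentences).
corpus = [
    "Alan Turing was born in London in 1912.",
    "Alan Turing proposed the imitation game.",
    "Ada Lovelace, born in London, wrote the first program.",
]

def mentions(sentence, entity):
    # Naive entity linking by exact string match (illustrative only).
    return entity.replace("_", " ") in sentence

# Distant supervision: co-occurrence of head and tail labels a sentence.
examples = [(s, h, r, t)
            for (h, r, t) in kg
            for s in corpus
            if mentions(s, h) and mentions(s, t)]
for ex in examples:
    print(ex)
```

Such labels are noisy whenever co-occurrence does not actually express the relation; the paper's knowledge-base-completion validation step exists precisely to re-score extractions against the induced graph's global structure.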
Figures
Tweets
arxiv_org: Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and... https://t.co/DgCt5UR7vo https://t.co/jL19zQCEIl
BrundageBot: Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and Validation. Alfio Gliozzo, Michael R. Glass, Sarthak Dash, and Mustafa Canim https://t.co/2zukydGiLf
arxivml: "Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and Validation", Alfio Gliozz… https://t.co/DsD6m7K2nQ
arxiv_cscl: Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and Validation https://t.co/OBOkZsIKXH
arxiv_cscl: Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and Validation https://t.co/OBOkZt0lPf
HubBucket: RT @arxiv_org: Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and... https://t.co/DgCt5UR7vo https://…
RexDouglass: RT @arxiv_org: Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and... https://t.co/DgCt5UR7vo https://…
RexDouglass: RT @arxiv_cscl: Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and Validation https://t.co/OBOkZt0lPf
fifcsml: RT @arxiv_org: Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and... https://t.co/DgCt5UR7vo https://…
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 4
Total Words: 6197
Unique Words: 1952

2.054 Mikeys
#9. Deep Reinforcement Learning for Foreign Exchange Trading
Chun-Chieh Wang, Yun-Cheng Tsai
Reinforcement learning can interact with its environment and is suitable for applications in decision control systems. We therefore used reinforcement learning to build a foreign exchange trading system, avoiding the long-standing problem of unstable trends in deep learning predictions. In the system design, we optimized the Sure-Fire statistical arbitrage policy, defined three different actions, encoded the continuous price over a period of time as a heat-map view of the Gramian Angular Field (GAF), and compared the Deep Q Learning (DQN) and Proximal Policy Optimization (PPO) algorithms. To test feasibility, we analyzed three currency pairs: EUR/USD, GBP/USD, and AUD/USD. We trained on data in four-hour units from 1 August 2018 to 30 November 2018 and tested model performance on data from 1 December 2018 to 31 December 2018. The test results of the various models indicated that favorable investment performance was achieved as long as the model was able to handle complex and random processes and the...
more | pdf | html
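The GAF encoding mentioned in the abstract is compact enough to show in full. A minimal sketch, assuming the summation variant (GASF): rescale the price window to [-1, 1], map each value to an angle phi = arccos(x), and form the matrix G[i, j] = cos(phi_i + phi_j), giving an image the DQN/PPO agent can consume. The synthetic random-walk prices are illustrative.

```python
import numpy as np

def gasf(series):
    """Gramian Angular Summation Field of a 1-D series."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # values -> angles
    return np.cos(phi[:, None] + phi[None, :])        # pairwise angle sums

prices = 100 + np.cumsum(np.random.default_rng(1).normal(size=24))
image = gasf(prices)
print(image.shape)   # (24, 24) heat-map-style input for the agent
```

Encoding a window as a 2-D Gramian preserves pairwise temporal relationships, which is why it pairs naturally with convolutional Q-networks and policy networks.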
Figures
None.
Tweets
arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
peisuke: Oh, interesting. https://t.co/lGrAP38qzz
arxivml: "Deep Reinforcement Learning for Foreign Exchange Trading", Chun-Chieh Wang, Yun-Cheng Tsai https://t.co/GpbOUTr3m4
arxiv_cs_LG: Deep Reinforcement Learning for Foreign Exchange Trading. Chun-Chieh Wang and Yun-Cheng Tsai https://t.co/VAKGQqgsmT
SciFi: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/gcftM8TJVV
udmrzn: RT @arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
muktabh: RT @arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
fullNam35087976: RT @arxivml: "Deep Reinforcement Learning for Foreign Exchange Trading", Chun-Chieh Wang, Yun-Cheng Tsai https://t.co/GpbOUTr3m4
puneethmishra: RT @arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
jajaldo: RT @arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
dannyehb: RT @arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
ml_unam: RT @arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
fifcsml: RT @arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
HomoSapienLCY: RT @arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
MozejkoMarcin: RT @arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
Sam09lol: RT @arxiv_org: Deep Reinforcement Learning for Foreign Exchange Trading. https://t.co/erZz0R9w47 https://t.co/DIhiLSWrLP
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 2
Total Words: 0
Unique Words: 0

2.053 Mikeys
#10. Transfer Learning for Relation Extraction via Relation-Gated Adversarial Learning
Ningyu Zhang, Shumin Deng, Zhanlin Sun, Jiaoyan Chen, Wei Zhang, Huajun Chen
Relation extraction aims to extract relational facts from sentences. Previous models mainly rely on manually labeled datasets, seed instances or human-crafted patterns, and distant supervision. However, human annotation is expensive, human-crafted patterns suffer from semantic drift, and distantly supervised samples are usually noisy. Domain adaptation methods enable leveraging labeled data from a different but related domain. However, different domains usually have different textual relation descriptions and different label spaces (the source label space is usually a superset of the target label space). To solve these problems, we propose a novel relation-gated adversarial learning model for relation extraction, which extends adversarial-based domain adaptation. Experimental results show that the proposed approach outperforms previous domain adaptation methods on partial domain adaptation and can improve the accuracy of distantly supervised relation extraction through fine-tuning.
more | pdf | html
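A minimal sketch of the adversarial component, assuming the standard gradient-reversal-layer (GRL) construction from adversarial domain adaptation: a domain classifier is trained on encoder features while reversed gradients push the encoder toward domain-invariant features. The per-example gate weight below is only an illustrative reading of 'relation-gated' (down-weighting examples whose relations fall outside the target label space); the paper's actual gating mechanism may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips gradient sign on the way back."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

encoder = nn.Linear(32, 16)        # stand-in sentence encoder
domain_clf = nn.Linear(16, 2)      # source-vs-target discriminator

x = torch.randn(8, 32)                     # mixed source/target batch
domain = torch.randint(0, 2, (8,))         # 0 = source, 1 = target
gate = torch.rand(8)                       # hypothetical relation gate

feats = GradReverse.apply(encoder(x))
per_example = F.cross_entropy(domain_clf(feats), domain, reduction="none")
domain_loss = (gate * per_example).mean()  # gate re-weights the domain loss
domain_loss.backward()                     # encoder receives reversed grads
print(float(domain_loss))
```

Gating matters in partial domain adaptation because forcing full alignment against source-only relation classes would hurt; down-weighting them is one way to express that.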
Figures
None.
Tweets
BrundageBot: Transfer Learning for Relation Extraction via Relation-Gated Adversarial Learning. Ningyu Zhang, Shumin Deng, Zhanlin Sun, Jiaoyan Chen, Wei Zhang, and Huajun Chen https://t.co/hG3KO1Dx1v
arxiv_in_review: #AAAI2020 Transfer Learning for Relation Extraction via Relation-Gated Adversarial Learning. (arXiv:1908.08507v1 [cs\.LG]) https://t.co/3QeQGg7RlQ
arxivml: "Transfer Learning for Relation Extraction via Relation-Gated Adversarial Learning", Ningyu Zhang, Shumin Deng, Zha… https://t.co/mkaqr7tNTA
StatsPapers: Transfer Learning for Relation Extraction via Relation-Gated Adversarial Learning. https://t.co/q6rxjvxx4J
arxiv_cscl: Transfer Learning for Relation Extraction via Relation-Gated Adversarial Learning https://t.co/fZUQfGLypG
arxiv_cscl: Transfer Learning for Relation Extraction via Relation-Gated Adversarial Learning https://t.co/fZUQfGtX16
RexDouglass: RT @arxiv_cscl: Transfer Learning for Relation Extraction via Relation-Gated Adversarial Learning https://t.co/fZUQfGtX16
Github
None.
Youtube
None.
Other stats
Sample Sizes : None.
Authors: 6
Total Words: 0
Unique Words: 0

About

Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.

Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).

To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else).

To see beautiful figures extracted from papers, follow us on Instagram.

Tracking 177,899 papers.
