Top 10 arXiv Papers Today


2.257 Mikeys
#1. FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces
Run Wang, Lei Ma, Felix Juefei-Xu, Xiaofei Xie, Jian Wang, Yang Liu
In recent years, we have witnessed the unprecedented success of generative adversarial networks (GANs) and their variants in image synthesis. These techniques are widely adopted to synthesize fake faces, which pose a serious challenge to existing face recognition (FR) systems and bring potential security threats to social networks and media as the fakes spread and fuel misinformation. Unfortunately, robust detectors of these AI-synthesized fake faces are still in their infancy and are not ready to fully tackle this emerging challenge. Currently, image-forensics-based and learning-based approaches are the two major categories of strategies for detecting fake faces. In this work, we propose an alternative category of approaches based on monitoring neuron behavior. Studies on neuron coverage and interactions have shown that they can serve as testing criteria for deep learning systems, especially under exposure to adversarial attacks. Here, we conjecture that monitoring neuron behavior can...
more | pdf | html
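The abstract only gestures at the neuron-behavior idea, so here is a minimal, hypothetical sketch of that family of detectors: record layer-wise activation statistics of a pretrained CNN (a stock ResNet standing in for the face-recognition backbones the paper actually monitors) and train a shallow classifier on real-vs-fake faces. The statistic, backbone, and SVM choice are illustrative assumptions, not the paper's coverage criteria.

```python
# Hypothetical sketch, not the paper's method: per-layer mean activations of a
# pretrained CNN as "neuron behavior" features, fed to an SVM.
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(pretrained=True).eval()  # stands in for an FR network
activations = {}

def hook(name):
    def fn(module, inp, out):
        # one scalar per channel: mean activation over batch and spatial dims
        activations[name] = out.detach().mean(dim=(0, 2, 3))
    return fn

for name, module in backbone.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(hook(name))

def neuron_features(face_batch):
    """Flat vector of layer-wise neuron statistics for a batch of face crops."""
    activations.clear()
    with torch.no_grad():
        backbone(face_batch)
    return torch.cat([activations[k] for k in sorted(activations)]).numpy()

# Toy usage with random tensors standing in for real and fake face crops.
real = neuron_features(torch.rand(4, 3, 224, 224))
fake = neuron_features(torch.rand(4, 3, 224, 224))
clf = SVC(kernel="rbf").fit([real, fake], [0, 1])
```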
Figures
None.
Tweets
BrundageBot: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces. Run Wang, Lei Ma, Felix Juefei-Xu, Xiaofei Xie, Jian Wang, and Yang Liu https://t.co/ID9LvN9IV6
arxiv_cs_LG: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces. Run Wang, Lei Ma, Felix Juefei-Xu, Xiaofei Xie, Jian Wang, and Yang Liu https://t.co/CtOudYeVNk
EricSchles: RT @roadrunning01: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces pdf: https://t.co/nyrDqLtkR3 abs: https://t.co/yhM…
GiorgioPatrini: RT @roadrunning01: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces pdf: https://t.co/nyrDqLtkR3 abs: https://t.co/yhM…
miguelbandera: RT @roadrunning01: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces pdf: https://t.co/nyrDqLtkR3 abs: https://t.co/yhM…
KouroshMeshgi: RT @roadrunning01: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces pdf: https://t.co/nyrDqLtkR3 abs: https://t.co/yhM…
SingingData: RT @roadrunning01: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces pdf: https://t.co/nyrDqLtkR3 abs: https://t.co/yhM…
subhobrata1: RT @roadrunning01: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces pdf: https://t.co/nyrDqLtkR3 abs: https://t.co/yhM…
ajnovice: RT @roadrunning01: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces pdf: https://t.co/nyrDqLtkR3 abs: https://t.co/yhM…
meverteam: RT @roadrunning01: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces pdf: https://t.co/nyrDqLtkR3 abs: https://t.co/yhM…
caymanlee: RT @roadrunning01: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces pdf: https://t.co/nyrDqLtkR3 abs: https://t.co/yhM…
Pol09122455: RT @roadrunning01: FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces pdf: https://t.co/nyrDqLtkR3 abs: https://t.co/yhM…
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 6
Total Words: 0
Unique Words: 0

2.249 Mikeys
#2. Sending a Spacecraft to Interstellar Comet C/2019 Q4 (Borisov)
Adam Hibberd, Nikolaos Perakis, Andreas M. Hein
A potential second interstellar object, C/2019 Q4 (Borisov), was discovered after the first known interstellar object, 1I/'Oumuamua. Can we send a spacecraft to this object using existing technologies? In this paper, we assess the technical feasibility of a mission to C/2019 Q4 (Borisov) using existing technologies. We apply the Optimum Interplanetary Trajectory Software (OITS) tool to generate trajectories to C/2019 Q4 (Borisov). As a result, we obtain a minimal-DeltaV trajectory with a launch date in July 2018. For this trajectory, a Falcon Heavy launcher could have hauled a 2-ton spacecraft to C/2019 Q4 (Borisov). For a later launch date, results for a combined powered Jupiter flyby with a Solar Oberth maneuver are presented. For a launch in 2030, we could reach C/2019 Q4 (Borisov) in 2045 using the Space Launch System, up-scaled Parker probe heatshield technology, and solid propulsion engines. A CubeSat-class spacecraft with a mass of 3 kg could be sent to C/2019 Q4 (Borisov). If C/2019 Q4 (Borisov) turns out to be indeed an...
more | pdf | html
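As a rough companion to the abstract's mention of a Solar Oberth maneuver, here is a back-of-the-envelope vis-viva sketch (not the OITS trajectory optimisation the authors use) of how a burn deep in the Sun's gravity well buys a large hyperbolic excess speed. The perihelion distance and burn size are illustrative assumptions.

```python
# Back-of-the-envelope Solar Oberth estimate using only the vis-viva equation.
import math

MU_SUN = 1.32712440018e20  # solar gravitational parameter, m^3/s^2
AU = 1.495978707e11        # metres

def v_infinity_after_oberth(r_perihelion_au, delta_v_kms):
    """Hyperbolic excess speed (km/s) after a burn at perihelion,
    assuming the probe arrives there on a parabolic (escape-speed) orbit."""
    r = r_perihelion_au * AU
    v_escape = math.sqrt(2.0 * MU_SUN / r)   # speed on a parabolic orbit at r
    v_after = v_escape + delta_v_kms * 1e3   # burn applied at perihelion
    return math.sqrt(v_after**2 - v_escape**2) / 1e3

# e.g. a 5 km/s burn at ~10 solar radii (~0.047 AU, a Parker-probe-like perihelion)
print(v_infinity_after_oberth(0.047, 5.0))  # roughly 40+ km/s of excess speed
```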
Figures
None.
Tweets
localgroupjp: If we had tried to send a probe to the second interstellar-object candidate, C/2019 Q4 (Borisov), and had been able to launch in July 2018, a Falcon Heavy rocket could reportedly have delivered a 2-ton spacecraft to it. https://t.co/aJ63tnNN4B
I4Interstellar: i4is Project Lyra team publishes - Sending a Spacecraft to Interstellar Comet C/2019 Q4 (Borisov) - https://t.co/DPrspx3K1e https://t.co/wm59ZRGGtr
Comet2013A1: Fresh off the arxiv #C2019Q4 #Borisov "Sending a Spacecraft to Interstellar Comet C/2019 Q4 (Borisov)" https://t.co/NcXlHPsCuV
StarshipBuilder: Sending a Spacecraft to Interstellar Comet C/2019 Q4 (Borisov) https://t.co/YrbQFtMaxj
scimichael: Sending a Spacecraft to Interstellar Comet C/2019 Q4 (Borisov) https://t.co/KylIVL05d7
I4Interstellar: RT @I4Interstellar: i4is Project Lyra team publishes - Sending a Spacecraft to Interstellar Comet C/2019 Q4 (Borisov) - https://t.co/DPrspx3K1e https://t.co/wm59ZRGGtr
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 3
Total Words: 0
Unique Words: 0

2.195 Mikeys
#3. A Comparative Study on Transformer vs RNN in Speech Applications
Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto, Xiaofei Wang, Shinji Watanabe, Takenori Yoshimura, Wangyou Zhang
Sequence-to-sequence models have been widely used in end-to-end speech processing, for example, automatic speech recognition (ASR), speech translation (ST), and text-to-speech (TTS). This paper focuses on an emergent sequence-to-sequence model called the Transformer, which achieves state-of-the-art performance in neural machine translation and other natural language processing applications. We undertook intensive studies in which we experimentally compared and analyzed the Transformer and conventional recurrent neural networks (RNNs) on a total of 15 ASR, one multilingual ASR, one ST, and two TTS benchmarks. Our experiments revealed various training tips and significant performance benefits obtained with the Transformer for each task, including the surprising superiority of the Transformer on 13 of the 15 ASR benchmarks in comparison with the RNN. We are preparing to release Kaldi-style reproducible recipes using open-source and publicly available datasets for all of the ASR, ST, and TTS tasks so that the community can build on our exciting outcomes.
more | pdf | html
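To make the Transformer-vs-RNN comparison concrete, here is a minimal sketch (unrelated to the authors' Kaldi/ESPnet-style recipes) that instantiates a Transformer encoder and a bidirectional LSTM encoder over the same 80-dimensional feature frames and prints their output shapes and parameter counts. All hyperparameters are illustrative assumptions.

```python
# Toy comparison of a Transformer encoder vs a bi-LSTM encoder for speech features.
import torch
import torch.nn as nn

feat_dim, frames, batch = 80, 200, 8
x = torch.randn(frames, batch, feat_dim)  # (time, batch, feature)

transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, dim_feedforward=256),
    num_layers=6,
)
rnn = nn.LSTM(input_size=feat_dim, hidden_size=256, num_layers=4,
              bidirectional=True)

def count_params(m):
    return sum(p.numel() for p in m.parameters())

print("transformer:", transformer(x).shape, count_params(transformer))
out, _ = rnn(x)
print("bi-lstm    :", out.shape, count_params(rnn))
```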
Figures
None.
Tweets
BrundageBot: A Comparative Study on Transformer vs RNN in Speech Applications. Karita, Chen, Hayashi, Hori, Inaguma, Jiang, Someki, Soplin, Yamamoto, Wang, Watanabe, Yoshimura, and Zhang https://t.co/ZG2tuYIGSz
kari_tech: https://t.co/3QPtNiEk46 Our preprint "A Comparative Study on Transformer vs RNN in Speech Applications" (ASRU2019) is available! https://t.co/g6XFlxpYYB
ballforest: RT @kari_tech: https://t.co/3QPtNiEk46 Our preprint "A Comparative Study on Transformer vs RNN in Speech Applications" (ASRU2019) is availa…
kastnerkyle: RT @kari_tech: https://t.co/3QPtNiEk46 Our preprint "A Comparative Study on Transformer vs RNN in Speech Applications" (ASRU2019) is availa…
ymas0315: RT @kari_tech: https://t.co/3QPtNiEk46 Our preprint "A Comparative Study on Transformer vs RNN in Speech Applications" (ASRU2019) is availa…
r9y9: RT @kari_tech: https://t.co/3QPtNiEk46 Our preprint "A Comparative Study on Transformer vs RNN in Speech Applications" (ASRU2019) is availa…
chbalajitilak: RT @arxiv_cscl: A Comparative Study on Transformer vs RNN in Speech Applications https://t.co/KDiMBh0O1D
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 13
Total Words: 0
Unique Words: 0

2.174 Mikeys
#4. White-Box Adversarial Defense via Self-Supervised Data Estimation
Zudi Lin, Hanspeter Pfister, Ziming Zhang
In this paper, we study the problem of how to defend classifiers against adversarial attacks that fool them with subtly modified input data. In contrast to previous works, here we focus on white-box adversarial defense, where the attackers are granted full access not only to the classifiers but also to the defenders in order to produce as strong attacks as possible. In such a context, we propose viewing a defender as a functional, a higher-order function that takes functions as its argument so as to represent a function space, rather than as a fixed function as is conventional. From this perspective, a defender should be realized and optimized individually for each adversarial input. To this end, we propose RIDE, an efficient and provably convergent self-supervised learning algorithm for individual data estimation that protects predictions from adversarial attacks. We demonstrate significant improvements in adversarial defense performance on image recognition, e.g., 98%, 76%, and 43% test accuracy on the MNIST, CIFAR-10, and ImageNet datasets...
more | pdf | html
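The core idea, a defender optimised per input, can be pictured as follows. This is not the RIDE algorithm; it is a hedged stand-in that re-estimates each incoming image with a simple self-supervised objective (fidelity plus total-variation smoothness) before classifying the estimate instead of the raw input.

```python
# Hedged sketch of per-input data estimation before classification.
import torch

def total_variation(img):
    # encourages locally smooth images, a simple self-supervised prior
    return (img[..., 1:, :] - img[..., :-1, :]).abs().mean() + \
           (img[..., :, 1:] - img[..., :, :-1]).abs().mean()

def estimate_then_classify(classifier, x, steps=50, lam=0.1, lr=0.05):
    """Optimise a reconstruction z of x, then classify z instead of x."""
    z = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = (z - x).pow(2).mean() + lam * total_variation(z)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return classifier(z)
```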
Figures
None.
Tweets
BrundageBot: White-Box Adversarial Defense via Self-Supervised Data Estimation. Zudi Lin, Hanspeter Pfister, and Ziming Zhang https://t.co/w0cY6YXt0h
arxivml: "White-Box Adversarial Defense via Self-Supervised Data Estimation", Zudi Lin, Hanspeter Pfister, Ziming Zhang https://t.co/YWbgzEnunU
arxiv_cs_LG: White-Box Adversarial Defense via Self-Supervised Data Estimation. Zudi Lin, Hanspeter Pfister, and Ziming Zhang https://t.co/Fr3spOyQWg
arxiv_cscv: White-Box Adversarial Defense via Self-Supervised Data Estimation https://t.co/sRy0ug3l3f
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 3
Total Words: 0
Unique Words: 0

2.173 Mikeys
#5. Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering
Shiyue Zhang, Mohit Bansal
Text-based Question Generation (QG) aims at generating natural and relevant questions that can be answered by a given answer in some context. Existing QG models suffer from a "semantic drift" problem, i.e., the semantics of the model-generated question drifts away from the given context and answer. In this paper, we first propose two semantics-enhanced rewards obtained from downstream question paraphrasing and question answering tasks to regularize the QG model to generate semantically valid questions. Second, since the traditional evaluation metrics (e.g., BLEU) often fall short in evaluating the quality of generated questions, we propose a QA-based evaluation method which measures the QG model's ability to mimic human annotators in generating QA training data. Experiments show that our method achieves the new state-of-the-art performance w.r.t. traditional metrics, and also performs best on our QA-based evaluation metrics. Further, we investigate how to use our QG model to augment QA datasets and enable semi-supervised QA. We...
more | pdf | html
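A common way to realise "semantics-enhanced rewards" is to mix the usual maximum-likelihood loss with a REINFORCE term weighted by a downstream reward. The sketch below assumes that setup; the paper's question-paraphrasing and question-answering rewards are stubbed out as a plain tensor, and the tensors are hypothetical.

```python
# Illustrative mixed ML + REINFORCE loss for question generation.
import torch

def mixed_qg_loss(log_probs, sampled_log_probs, reward, gamma=0.5):
    """log_probs: summed log-likelihood of the reference question (teacher forcing).
    sampled_log_probs: summed log-probability of a question sampled from the model.
    reward: per-sample score from a downstream QA/paraphrase model (treated as constant)."""
    ml_loss = -log_probs.mean()
    rl_loss = -(reward.detach() * sampled_log_probs).mean()
    return (1 - gamma) * ml_loss + gamma * rl_loss

# toy usage with hypothetical values
ref_lp = torch.tensor([-12.3, -9.8], requires_grad=True)
smp_lp = torch.tensor([-14.1, -11.0], requires_grad=True)
reward = torch.tensor([0.7, 0.2])   # e.g. QA F1 on the sampled questions
loss = mixed_qg_loss(ref_lp, smp_lp, reward)
loss.backward()
```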
Figures
None.
Tweets
BrundageBot: Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering. Shiyue Zhang and Mohit Bansal https://t.co/JAV8U4ybe5
arxivml: "Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering", Shiyue Zhang, Mohit Bans… https://t.co/XHCrUAdEIQ
arxiv_cs_LG: Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering. Shiyue Zhang and Mohit Bansal https://t.co/wVa2BULeLu
arxiv_cscl: Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering https://t.co/LQ3AvzwQWg
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 2
Total Words: 0
Unique Words: 0

2.144 Mikeys
#6. Five misconceptions about black holes
Jorge Pinochet
Given the great interest that black holes arouse among non-specialists, it is important to analyse misconceptions related to them. According to the author, the most common misconceptions are that: (1) black holes are formed from stellar collapse; (2) they are very massive; (3) they are very dense; (4) their gravity absorbs everything; and (5) they are black. The objective of this work is to analyse and correct these misconceptions. This article may be useful as pedagogical material in high school physics courses or in introductory courses in undergraduate physics.
more | pdf | html
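The "very dense" misconception (3) can be unpacked with one formula: the Schwarzschild radius r_s = 2GM/c^2 grows linearly with mass, so the mean density enclosed by the horizon falls as 1/M^2. The short calculation below uses illustrative masses to show that a supermassive black hole can be less dense than water.

```python
# Mean density inside the Schwarzschild radius for black holes of various masses.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

def mean_density(mass_kg):
    r = schwarzschild_radius(mass_kg)
    return mass_kg / (4.0 / 3.0 * math.pi * r**3)

for solar_masses in (10, 4e6, 4e9):  # stellar-mass, Sgr A*-scale, giant SMBH
    rho = mean_density(solar_masses * M_sun)
    print(f"{solar_masses:>10g} M_sun -> {rho:.3g} kg/m^3")
```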
Figures
None.
Tweets
emulenews: Five misconceptions about black holes, by Jorge Pinochet in Physics Education (2019) https://t.co/KAOOrlR1gM https://t.co/KMnb0JhxYL
LCTTA: Five misconceptions about black holes. (arXiv:1909.06006v1 [physics.pop-ph]) https://t.co/GMPdAvZWt2
DrRaulTorres: RT @emulenews: Five misconceptions about black holes, by Jorge Pinochet in Physics Education (2019) https://t.co/KAOOrlR1gM https://t.co/KM…
kur41: RT @emulenews: Five misconceptions about black holes, by Jorge Pinochet in Physics Education (2019) https://t.co/KAOOrlR1gM https://t.co/KM…
cuantizado: RT @emulenews: Five misconceptions about black holes, by Jorge Pinochet in Physics Education (2019) https://t.co/KAOOrlR1gM https://t.co/KM…
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 1
Total Words: 0
Unique Words: 0

2.13 Mikeys
#7. Reinforcement Learning: a Comparison of UCB Versus Alternative Adaptive Policies
Wesley Cowan, Michael N. Katehakis, Daniel Pirutinsky
In this paper we consider the basic version of Reinforcement Learning (RL) that involves computing optimal data-driven (adaptive) policies for Markov decision processes with unknown transition probabilities. We provide a brief survey of the state of the art in the area and compare the performance of the classic UCB policy of Burnetas and Katehakis (1997) with a new policy developed herein, which we call MDP-Deterministic Minimum Empirical Divergence (MDP-DMED), and with a method based on posterior sampling (MDP-PS).
more | pdf | html
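For readers unfamiliar with UCB, the sketch below shows the index in its simplest multi-armed-bandit form (UCB1); the paper itself treats the harder MDP setting with unknown transition probabilities and compares against DMED- and posterior-sampling-style policies, which are not reproduced here.

```python
# UCB1 on a Bernoulli bandit: pick the arm with the largest optimistic index.
import math
import random

def ucb1(arm_means, horizon=10_000):
    n_arms = len(arm_means)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:                       # play each arm once first
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if random.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward

print(ucb1([0.3, 0.5, 0.7]))  # approaches 0.7 * horizon as the horizon grows
```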
Figures
None.
Tweets
BrundageBot: Reinforcement Learning: a Comparison of UCB Versus Alternative Adaptive Policies. Wesley Cowan, Michael N. Katehakis, and Daniel Pirutinsky https://t.co/mSvyz7Xk4z
arxivml: "Reinforcement Learning: a Comparison of UCB Versus Alternative Adaptive Policies", Wesley Cowan, Michael N. Kateha… https://t.co/Qm05S3DRiT
arxiv_cs_LG: Reinforcement Learning: a Comparison of UCB Versus Alternative Adaptive Policies. Wesley Cowan, Michael N. Katehakis, and Daniel Pirutinsky https://t.co/56wEYBpP1x
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 3
Total Words: 0
Unique Words: 0

2.13 Mikeys
#8. Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs
Jonas Kubilius, Martin Schrimpf, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Kailyn Schmidt, Aran Nayebi, Daniel Bear, Daniel L. K. Yamins, James J. DiCarlo
Deep convolutional artificial neural networks (ANNs) are the leading class of candidate models of the mechanisms of visual processing in the primate ventral stream. While initially inspired by brain anatomy, over the past years, these ANNs have evolved from a simple eight-layer architecture in AlexNet to extremely deep and branching architectures, demonstrating increasingly better object categorization performance, yet bringing into question how brain-like they still are. In particular, typical deep models from the machine learning community are often hard to map onto the brain's anatomy due to their vast number of layers and missing biologically-important connections, such as recurrence. Here we demonstrate that better anatomical alignment to the brain and high performance on machine learning as well as neuroscience measures do not have to be in contradiction. We developed CORnet-S, a shallow ANN with four anatomically mapped areas and recurrent connectivity, guided by Brain-Score, a new large-scale composite of neural and...
more | pdf | html
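The key ingredient of a "shallow recurrent ANN" is a block whose weights are re-applied over several time steps, so depth in time substitutes for depth in layers. The sketch below is a toy illustration of that idea, not the released CORnet-S architecture; channel counts and step counts are arbitrary.

```python
# Toy recurrent convolutional "area": the same weights unrolled for a few steps.
import torch
import torch.nn as nn

class RecurrentConvArea(nn.Module):
    def __init__(self, channels, steps=3):
        super().__init__()
        self.steps = steps
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(channels)

    def forward(self, x):
        state = x
        for _ in range(self.steps):           # same weights, unrolled in time
            state = torch.relu(self.norm(self.conv(state)) + x)
        return state

area = RecurrentConvArea(channels=64, steps=3)
print(area(torch.randn(2, 64, 56, 56)).shape)  # torch.Size([2, 64, 56, 56])
```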
Figures
None.
Tweets
BrundageBot: Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs. Kubilius, Schrimpf, Hong, Majaj, Rajalingham, Issa, Kar, Bashivan, Prescott-Roy, Schmidt, Nayebi, Bear, Yamins, and DiCarlo https://t.co/NmuEKcb1WK
arxivml: "Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs", Jonas Kubilius, Martin Schrimpf, Ha Ho… https://t.co/ASnLLt2E3x
arxiv_cs_LG: Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs. Kubilius, Schrimpf, Hong, Majaj, Rajalingham, Issa, Kar, Bashivan, Prescott-Roy, Schmidt, Nayebi, Bear, Yamins, and DiCarlo https://t.co/WodOXHvfL1
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 14
Total Words: 0
Unique Words: 0

2.13 Mikeys
#9. Deep Adversarial Belief Networks
Yuming Huang, Ashkan Panahi, Hamid Krim, Yiyi Yu, Spencer L. Smith
We present a novel adversarial framework for training deep belief networks (DBNs), which includes replacing the generator network in the methodology of generative adversarial networks (GANs) with a DBN and developing a highly parallelizable numerical algorithm for training the resulting architecture in a stochastic manner. Unlike existing techniques, this framework can be applied to the most general form of DBNs with no requirement for back-propagation. As such, it lays a new foundation for developing DBNs on a par with GANs, with various regularization units such as pooling and normalization. Foregoing back-propagation, our framework also exhibits superior scalability compared to other DBN and GAN learning techniques. We present a number of numerical experiments in computer vision as well as neuroscience to illustrate the main advantages of our approach.
more | pdf | html
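The generator being swapped into the GAN framework is a deep belief network. The toy sketch below only shows ancestral sampling from a two-layer sigmoid belief network to make that component concrete; the paper's adversarial, back-propagation-free training rule is not reproduced, and the layer sizes and weights are arbitrary.

```python
# Ancestral sampling from a toy two-layer sigmoid belief network.
import torch

def sample_sigmoid_belief_net(w1, b1, w2, b2, n_samples=16):
    """Top-layer binary units -> hidden layer -> visible layer."""
    top = torch.bernoulli(torch.full((n_samples, w1.shape[0]), 0.5))
    hidden = torch.bernoulli(torch.sigmoid(top @ w1 + b1))
    visible_probs = torch.sigmoid(hidden @ w2 + b2)
    return torch.bernoulli(visible_probs)

# toy weights for a 32 -> 64 -> 784 (28x28) network
w1, b1 = torch.randn(32, 64) * 0.1, torch.zeros(64)
w2, b2 = torch.randn(64, 784) * 0.1, torch.zeros(784)
fake_images = sample_sigmoid_belief_net(w1, b1, w2, b2)
print(fake_images.shape)  # torch.Size([16, 784])
```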
Figures
None.
Tweets
BrundageBot: Deep Adversarial Belief Networks. Yuming Huang, Ashkan Panahi, Hamid Krim, Yiyi Yu, and Spencer L. Smith https://t.co/CJSgXdpYvj
arxivml: "Deep Adversarial Belief Networks", Yuming Huang, Ashkan Panahi, Hamid Krim, Yiyi Yu, Spencer L. Smith https://t.co/PL6FesaG55
arxiv_cs_LG: Deep Adversarial Belief Networks. Yuming Huang, Ashkan Panahi, Hamid Krim, Yiyi Yu, and Spencer L. Smith https://t.co/H0cMdWAStA
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 5
Total Words: 0
Unique Words: 0

2.13 Mikeys
#10. Deep Learned Path Planning via Randomized Reward-Linked-Goals and Potential Space Applications
Tamir Blum, William Jones, Kazuya Yoshida
Space exploration missions have seen the use of increasingly sophisticated robotic systems with ever more autonomy. Deep learning promises to take this a step further, with applications to high-level tasks, like path planning, as well as low-level tasks, like motion control, which are critical components for mission efficiency and success. Using deep reinforcement end-to-end learning with randomized reward-function parameters during training, we teach a simulated 8-degree-of-freedom quadruped ant-like robot to travel anywhere within a perimeter, conducting path planning and motion control with a single neural network, without any system model or prior knowledge of the terrain or environment. Our approach also allows for user-specified waypoints, which could translate well to either fully autonomous or semi-autonomous/teleoperated space applications that encounter communication delays. We trained the agent using randomly generated waypoints linked to the reward function and passed waypoint coordinates as inputs to the neural network....
more | pdf | html
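The "randomized reward-linked goals" idea can be pictured as an environment wrapper that resamples a waypoint each episode, appends its coordinates to the observation fed to the policy network, and rewards progress toward it. The sketch below assumes that reading and is not the authors' simulation setup; class and parameter names are hypothetical.

```python
# Hypothetical gym-style task wrapper with a randomized, reward-linked waypoint.
import numpy as np

class RandomWaypointTask:
    def __init__(self, arena_radius=5.0):
        self.arena_radius = arena_radius

    def reset(self, robot_xy):
        # goal is re-randomised every episode and linked to the reward below
        angle = np.random.uniform(0, 2 * np.pi)
        radius = np.random.uniform(0, self.arena_radius)
        self.goal = radius * np.array([np.cos(angle), np.sin(angle)])
        self.prev_dist = np.linalg.norm(self.goal - robot_xy)
        return self.observe(robot_xy)

    def observe(self, robot_xy):
        # waypoint coordinates are fed to the policy network as extra inputs
        return np.concatenate([robot_xy, self.goal])

    def reward(self, robot_xy):
        dist = np.linalg.norm(self.goal - robot_xy)
        r = self.prev_dist - dist          # progress made toward the waypoint
        self.prev_dist = dist
        return r

task = RandomWaypointTask()
obs = task.reset(np.zeros(2))
print(obs, task.reward(np.array([0.5, 0.0])))
```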
Figures
None.
Tweets
BrundageBot: Deep Learned Path Planning via Randomized Reward-Linked-Goals and Potential Space Applications. Tamir Blum, William Jones, and Kazuya Yoshida https://t.co/Lb1qLiL7rX
arxivml: "Deep Learned Path Planning via Randomized Reward-Linked-Goals and Potential Space Applications", Tamir Blum, Willi… https://t.co/hThfd1vPJa
arxiv_cs_LG: Deep Learned Path Planning via Randomized Reward-Linked-Goals and Potential Space Applications. Tamir Blum, William Jones, and Kazuya Yoshida https://t.co/9hmTc1oOR9
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 3
Total Words: 0
Unique Words: 0

About

Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.

Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).

To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else).

To see beautiful figures extracted from papers, follow us on Instagram.

Tracking 189,566 papers.
