Visual Domain Adaptation with Manifold Embedded Distribution Alignment
Visual domain adaptation aims to learn robust classifiers for the target domain by leveraging knowledge from a source domain. Existing methods either attempt to align the cross-domain distributions or perform manifold subspace learning. However, two significant challenges remain: (1) degenerated feature transformation, meaning that distribution alignment is often performed in the original feature space, where feature distortions are hard to overcome, while subspace learning alone is not sufficient to reduce the distribution divergence; and (2) unevaluated distribution alignment, meaning that existing methods align the marginal and conditional distributions with equal weight and fail to account for the different importance of these two distributions in real applications. In this paper, we propose a Manifold Embedded Distribution Alignment (MEDA) approach to address these challenges. MEDA learns a domain-invariant classifier in the Grassmann manifold with structural risk minimization, while performing dynamic distribution alignment to quantitatively account for the relative importance of the marginal and conditional distributions. To the best of our knowledge, MEDA is the first attempt to perform dynamic distribution alignment for manifold domain adaptation. Extensive experiments demonstrate that MEDA achieves significant improvements in classification accuracy over state-of-the-art traditional and deep methods.
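
The abstract describes dynamic distribution alignment only at a high level: the overall divergence is modeled as a weighted combination of a marginal term and per-class conditional terms, balanced by an adaptive factor mu. The minimal sketch below illustrates one plausible way to build such a combined MMD coefficient matrix; the function name, the use of target pseudo-labels, and the specific matrix construction are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code) of dynamic distribution alignment:
# divergence ~ (1 - mu) * marginal MMD + mu * sum over classes of conditional MMD,
# where mu in [0, 1] balances the two kinds of terms.
import numpy as np

def build_mmd_matrix(ns, nt, ys, yt_pseudo, n_classes, mu):
    """Combined MMD coefficient matrix M = (1 - mu) * M0 + mu * sum_c Mc,
    defined over the concatenated source (first ns rows) and target samples."""
    n = ns + nt
    # Marginal term M0: whole source set vs. whole target set.
    e = np.vstack([np.ones((ns, 1)) / ns, -np.ones((nt, 1)) / nt])
    M = (1.0 - mu) * (e @ e.T)
    # Conditional terms Mc: source vs. target samples of the same class,
    # using pseudo-labels for the unlabeled target domain.
    for c in range(n_classes):
        e_c = np.zeros((n, 1))
        src_idx = np.where(ys == c)[0]
        tgt_idx = ns + np.where(yt_pseudo == c)[0]
        if len(src_idx) > 0 and len(tgt_idx) > 0:
            e_c[src_idx] = 1.0 / len(src_idx)
            e_c[tgt_idx] = -1.0 / len(tgt_idx)
        M += mu * (e_c @ e_c.T)
    return M

# Toy usage: 4 source and 3 target samples, 2 classes, mu weighted toward the
# conditional terms (e.g. when the marginal distributions are already close).
ys = np.array([0, 0, 1, 1])
yt_pseudo = np.array([0, 1, 1])
M = build_mmd_matrix(ns=4, nt=3, ys=ys, yt_pseudo=yt_pseudo, n_classes=2, mu=0.7)
print(M.shape)  # (7, 7)
```

Setting mu = 0 reduces the sketch to purely marginal alignment and mu = 1 to purely conditional alignment; the point of the paper's dynamic scheme is to estimate this balance quantitatively rather than fixing it by hand.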
Authors

Jindong Wang
Wenjie Feng
Yiqiang Chen
Han Yu
Meiyu Huang
Philip S. Yu