[MRG] Laplace regularized OTDA #140
Conversation
examples/plot_otda_laplacian.py
Outdated
# -*- coding: utf-8 -*-
"""
========================
OT for domain adaptation
add a proper title since it will appear in the documentation
done
References
----------

.. [5] N. Courty; R. Flamary; D. Tuia; A. Rakotomamonjy,
please add the paper that provides the expression and gradients here and in the README.
You should also update the README at the line:
Optimal transport for domain adaptation with group lasso regularization [5]
to add Laplacian regularization, and also add yourself as a contributor ;)
added Laplacian regularization to the README, and the workshop citation to da.py (in both emd_laplace and EMDLaplace) as well as to the README
and myself to contributors
Nice PR. I would add the option to select between position and displacement regularization.
ot/da.py
Outdated
    + (1 - alpha) * np.trace(np.dot(xs.T, np.dot(G, np.dot(lT, np.dot(G.T, xs)))))

def df(G):
    return alpha * np.dot(lS + lS.T, np.dot(G, np.dot(xt, xt.T)))\
precompute (lS + lS.T), (lT + lT.T) and np.dot(xt, xt.T)
done
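The precomputation the reviewer asks for can be sketched as a stand-alone version of the PR's f/df pair. The variable names (xs, xt, lS, lT, alpha) follow the diff above; the factory function wrapping them is illustrative, not the final POT code:

```python
import numpy as np

def make_laplace_obj(xs, xt, lS, lT, alpha):
    # Precompute the symmetric Laplacian sums and the Gram matrices once,
    # instead of rebuilding them on every call to f/df inside the solver.
    lS_sym = lS + lS.T
    lT_sym = lT + lT.T
    xs_gram = np.dot(xs, xs.T)
    xt_gram = np.dot(xt, xt.T)

    def f(G):
        # alpha-weighted source-graph term + (1 - alpha)-weighted target-graph term
        return alpha * np.trace(np.dot(xt.T, np.dot(G.T, np.dot(lS, np.dot(G, xt))))) \
            + (1 - alpha) * np.trace(np.dot(xs.T, np.dot(G, np.dot(lT, np.dot(G.T, xs)))))

    def df(G):
        # gradient of f in G; only the precomputed products are used here
        return alpha * np.dot(lS_sym, np.dot(G, xt_gram)) \
            + (1 - alpha) * np.dot(xs_gram, np.dot(G, lT_sym))

    return f, df
```

The gradient can be checked against finite differences of f, which is a quick way to validate this kind of refactoring.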
lS = laplacian(sS)
lT = laplacian(sT)

def f(G):
could you please implement both position regularization and displacement regularization?
It only changes the M matrix and provides more generality to the function.
See equation (11) vs (9):
https://remi.flamary.com/biblio/flamary2014optlaplace.pdf
done
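Since the reviewer notes that switching between position (eq. 9) and displacement (eq. 11) regularization only changes the M matrix, the displacement variant can be sketched by folding the extra linear-in-G terms into the cost. This is a hedged sketch: the helper name, the reg_lap weight and the absence of any 1/n normalization are assumptions, not the final POT implementation:

```python
import numpy as np

def adjust_cost_for_disp(M, xs, xt, lS, lT, alpha, reg_lap=1.0):
    # Displacement regularization differs from position regularization
    # only by terms linear in G; those can be absorbed into the cost M
    # before calling the same conditional-gradient solver.
    Cs = -reg_lap * alpha * np.dot(lS + lS.T, np.dot(xs, xt.T))
    Ct = -reg_lap * (1 - alpha) * np.dot(np.dot(xs, xt.T), lT + lT.T)
    return M + Cs + Ct
```

With symmetric graph Laplacians, the inner product of G with these extra terms recovers exactly the cross terms that appear when expanding the displacement objective.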
Sorry, one last review:
at the moment you cannot pass the parameters for the sim (nn or rbfparam) to the class, which clearly limits its application.
Please add this possibility (for instance by storing kwargs).
ot/da.py
Outdated
    in NIPS Workshop on Optimal Transport and Machine Learning OTML, 2014.
    """

def __init__(self, reg_type='pos', reg_lap=1., reg_src=1., alpha=0.5,
how can you pass nn or sim to the emd_laplace function?
This is necessary; you should use kwargs and store them for the fit.
added a kwargs dict in fit that is passed on to emd_laplace
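The kwargs plumbing the reviewer asks for could look like the following. This is a minimal sketch; the class name and attribute names are illustrative, and the commented call shows where the stored parameters would be forwarded to the low-level solver:

```python
class EMDLaplaceSketch:
    """Illustrative transport estimator that captures similarity-specific
    parameters (e.g. nn for 'knn', rbfparam for 'gauss') in __init__
    and forwards them to the solver at fit time."""

    def __init__(self, reg_type='pos', reg_lap=1., reg_src=1., alpha=0.5,
                 similarity='knn', **kwargs):
        self.reg_type = reg_type
        self.reg_lap = reg_lap
        self.reg_src = reg_src
        self.alpha = alpha
        self.similarity = similarity
        # store extra keyword arguments for later use in fit,
        # e.g. {'nn': 5} or {'rbfparam': 0.1}
        self.sim_kwargs = kwargs

    def fit(self, Xs, Xt):
        # In the real class this dict would be forwarded to the solver:
        #   coupling = emd_laplace(..., sim=self.similarity, **self.sim_kwargs)
        self.solver_args_ = dict(sim=self.similarity, **self.sim_kwargs)
        return self
```

Storing unknown keyword arguments this way keeps the constructor signature stable while letting each similarity define its own parameters.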
xs : np.ndarray (ns,d)
    samples in the source domain
xt : np.ndarray (nt,d)
    samples in the target domain
sim is not documented. You should add it to the parameter list and also define the parameters for each sim: 'nn' and the Gaussian kernel width.
documented sim and added a common parameter 'similarity_param'
ot/da.py
Outdated
if sim == 'gauss':
    if 'rbfparam' not in kwargs:
        kwargs['rbfparam'] = 1 / (2 * (np.mean(dist(xs, xs, 'sqeuclidean')) ** 2))
    sS = kernel(xs, xs, method=kwargs['sim'], sigma=kwargs['rbfparam'])
do not use kwargs['sim'], use 'gauss'
I left it in case another kernel is one day added to kernel, but I can fix it.
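The reviewer's point, hard-coding the 'gauss' branch instead of reading kwargs['sim'], can be sketched as below. The dist stand-in and the mean-based default bandwidth mirror the PR's fallback; whether this matches ot.kernel's exact parametrization is an assumption:

```python
import numpy as np

def dist(x1, x2, metric='sqeuclidean'):
    # minimal stand-in for the library's squared-Euclidean pairwise distance
    diff = x1[:, None, :] - x2[None, :, :]
    return (diff ** 2).sum(-1)

def gauss_similarity(xs, rbfparam=None):
    # Default bandwidth mirroring the PR's fallback when rbfparam is absent.
    if rbfparam is None:
        rbfparam = 1 / (2 * (np.mean(dist(xs, xs, 'sqeuclidean')) ** 2))
    # Hard-code the Gaussian branch rather than dispatching on kwargs['sim'],
    # since this code path is only reached when sim == 'gauss'.
    return np.exp(-rbfparam * dist(xs, xs, 'sqeuclidean'))
```

The resulting similarity matrix is symmetric with ones on the diagonal, which is what the graph Laplacian construction above expects.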