Scoring functions #165

Closed · bellet opened this issue Jan 31, 2019 · 0 comments · Fixed by #168
bellet commented Jan 31, 2019

Besides the default score method of our estimators, we should properly test and document the fact that various metrics can be used to evaluate our supervised and weakly-supervised metric learners through the scoring parameter of sklearn's tools such as model_selection.GridSearchCV and model_selection.cross_val_score.

This should be done after dealing with #131, which will introduce a predict method for weakly supervised pairwise metric learners.
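
A minimal sketch of the kind of usage this would document, for the supervised case: a metric learner plugged into a sklearn pipeline, scored with an arbitrary scoring string instead of the default score method. The specific estimators and parameters below (NCA with max_iter, a KNeighborsClassifier downstream, the grid of n_neighbors values) are illustrative assumptions, not taken from this issue.

```python
# Illustrative sketch only: assumes a metric-learn version where NCA is usable
# as a sklearn transformer inside a Pipeline.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

from metric_learn import NCA

X, y = load_iris(return_X_y=True)

# Supervised case: the metric learner feeds a downstream classifier, and any
# sklearn scoring string ('accuracy', 'f1_macro', ...) replaces the default score.
pipe = make_pipeline(NCA(max_iter=100), KNeighborsClassifier(n_neighbors=3))
print(cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean())

# The same scoring parameter works with GridSearchCV (hypothetical grid).
grid = GridSearchCV(pipe, {"kneighborsclassifier__n_neighbors": [3, 5]},
                    cv=5, scoring="f1_macro")
grid.fit(X, y)
print(grid.best_score_)
```

For the weakly-supervised pairwise learners, the natural analogue would be passing something like scoring='roc_auc' directly to cross_val_score on pairs and pair labels, which is exactly what requires the predict method introduced by #131.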

@bellet bellet added this to the v0.5.0 milestone Feb 1, 2019