It seems that you used different criteria during training and testing, as the code below shows:

IN TEST:
scores = np.dot(vecs.T, qvecs)  # similarity = dot product between database and query descriptors

IN TRAIN:
dif = x1 - x2
D = torch.pow(dif + eps, 2).sum(dim=0).sqrt()  # per-pair Euclidean distance between descriptors
y = 0.5*lbl*torch.pow(D, 2) + 0.5*(1 - lbl)*torch.pow(torch.clamp(margin - D, min=0), 2)  # contrastive loss
y = torch.sum(y)
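
To make the two snippets concrete, here is a minimal self-contained sketch (my own code, not taken from the repo) that runs both computations on dummy descriptors. It assumes the descriptors are L2-normalized and uses made-up values for eps, margin, and the labels; under that normalization assumption the squared distance used at training time and the dot-product score used at test time satisfy D^2 ≈ 2 - 2*score, i.e. they induce the same ranking.

import numpy as np
import torch

dim, n_db = 8, 5
rng = np.random.default_rng(0)

# Dummy database descriptors and one query, L2-normalized along the feature axis
# (assumption: the real pipeline also outputs unit-norm descriptors).
vecs = rng.standard_normal((dim, n_db)).astype(np.float32)
vecs /= np.linalg.norm(vecs, axis=0, keepdims=True)
qvecs = rng.standard_normal((dim, 1)).astype(np.float32)
qvecs /= np.linalg.norm(qvecs, axis=0, keepdims=True)

# Test-time criterion from the snippet above: dot-product similarity.
scores = np.dot(vecs.T, qvecs)                          # shape (n_db, 1)

# Train-time criterion from the snippet above: Euclidean distance fed into a contrastive loss.
eps, margin = 1e-6, 0.7                                 # hypothetical values, not from the repo
lbl = torch.zeros(n_db)                                 # pretend all pairs are negatives here
x1 = torch.from_numpy(qvecs).repeat(1, n_db)            # query repeated once per candidate
x2 = torch.from_numpy(vecs)                             # candidate descriptors

dif = x1 - x2
D = torch.pow(dif + eps, 2).sum(dim=0).sqrt()           # per-pair Euclidean distance
y = 0.5*lbl*torch.pow(D, 2) + 0.5*(1 - lbl)*torch.pow(torch.clamp(margin - D, min=0), 2)
loss = torch.sum(y)

# With unit-norm descriptors: D^2 = 2 - 2*score (up to the eps term),
# so ranking by distance and ranking by dot product agree.
print(D.pow(2).numpy())
print(2 - 2*scores[:, 0])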
I don't understand why you did this. Could you explain the reason for using different criteria in training and testing?