SIFT Feature Matching and Mismatch Elimination: BBF, RANSAC

Posted by jackyhwei on 2011-08-15 17:00
Step 1: the BBF algorithm, finding KNN on a KD-tree. First comes the matching.
 
1. What is a KD-tree? (from Wikipedia)
A k-dimensional tree (KD-tree) is, in essence, a balanced binary tree in which each node splits the point set along one coordinate axis.
A typical KD-tree construction procedure:
function kdtree (list of points pointList, int depth)
{
    if pointList is empty
        return nil;
    else {
        // Select axis based on depth so that axis cycles through all valid values
        var int axis := depth mod k;
 
        // Sort point list and choose median as pivot element
        select median by axis from pointList;
 
        // Create node and construct subtrees
        var tree_node node;
        node.location := median;
        node.leftChild := kdtree(points in pointList before median, depth+1);
        node.rightChild := kdtree(points in pointList after median, depth+1);
        return node;
    }
}
 
Example: pointList = [(2,3), (5,4), (9,6), (4,7), (8,1), (7,2)]; tree = kdtree(pointList)
 
[Figure: examples of a 2D and a 3D KD-tree]
 
2. The BBF algorithm: finding KNN (K nearest neighbors) on a KD-tree
BBF (Best Bin First) is implemented with a priority queue (here a min-heap). Starting from the root, whenever the search takes one branch of the KD-tree, the bypassed sibling node is pushed onto the priority queue, and the search runs straight down to a leaf. Then the node with the smallest key (here, the smallest distance along its splitting dimension ki) is popped from the queue and the same descent is repeated, until the queue is empty or 200 iterations have been performed.
 
Step 1: build a KD-tree from the features of img2: kd_root = kdtree_build( feat2, n2 ); Here ki is the dimension with the largest variance, kv is the median value of the features along that dimension, features is the whole set of descendant features stored at the node, and n is their count.
/** a node in a k-d tree */
struct kd_node{
     int ki;                      /**< partition key index */
     double kv;                   /**< partition key value */
     int leaf;                    /**< 1 if node is a leaf, 0 otherwise */
     struct feature* features;    /**< features at this node */
     int n;                       /**< number of features */
     struct kd_node* kd_left;     /**< left child */
     struct kd_node* kd_right;    /**< right child */
};
Step 2: for each feature of img1, find its k nearest neighbors in the KD-tree; here k = 2:
k = kdtree_bbf_knn( kd_root, feat, 2, &nbrs, KDTREE_BBF_MAX_NN_CHKS );
     min_pq = minpq_init();
     minpq_insert( min_pq, kd_root, 0 );
     while( min_pq->n > 0 && t < max_nn_chks ) // keep searching while the queue is non-empty, capped at t < 200 steps
     {
         expl = (struct kd_node*)minpq_extract_min( min_pq ); // take the minimum: front & pop
         expl = explore_to_leaf( expl, feat, min_pq ); // descend from this node to a leaf, pushing the "worthwhile" nodes passed on the way onto min_pq
         for( i = 0; i < expl->n; i++ )
         {
              tree_feat = &expl->features[i];
              bbf_data->old_data = tree_feat->feature_data;
              bbf_data->d = descr_dist_sq(feat, tree_feat); // squared distance between the two descriptors
              tree_feat->feature_data = bbf_data;
              n += insert_into_nbr_array( tree_feat, _nbrs, n, k ); // insert into the neighbor array in ascending order, so the first k entries are the KNN; n grows by 1 or 0 and is the current element count
         }
         t++;
     }
An explanation of which nodes are "worthwhile" to enqueue:
struct kd_node* explore_to_leaf( struct kd_node* kd_node, struct feature* feat,
                                     struct min_pq* min_pq )//expl, feat, min_pq
{
     struct kd_node* unexpl, * expl = kd_node;
     double kv;
     int ki;
     while( expl && ! expl->leaf )
     {
         ki = expl->ki;
         kv = expl->kv;
         if( feat->descr[ki] <= kv ) {
              unexpl = expl->kd_right;
              expl = expl->kd_left; // go left; the right child is recorded
         }
         else {
              unexpl = expl->kd_left;
              expl = expl->kd_right; // go right; the left child is recorded
         }
         minpq_insert( min_pq, unexpl, ABS( kv - feat->descr[ki] ) ) ; // push the bypassed node with key |kv - feat->descr[ki]|, i.e. the distance along dimension ki
     }
     return expl;
}
Step 3: if the k nearest neighbors were found (k = 2), decide whether the match is valid: it counts if d0/d1 < 0.49.
              d0 = descr_dist_sq( feat, nbrs[0] ); // squared Euclidean distance between the two features
              d1 = descr_dist_sq( feat, nbrs[1] );
              if( d0 < d1 * NN_SQ_DIST_RATIO_THR ) // i.e. d0/d1 is below the 0.49 threshold
              {
                   pt1 = cvPoint( cvRound( feat->x ), cvRound( feat->y ) );
                   pt2 = cvPoint( cvRound( nbrs[0]->x ), cvRound( nbrs[0]->y ) );
                   pt2.y += img1->height;
                   cvLine( stacked, pt1, pt2, CV_RGB(255,0,255), 1, 8, 0 ); // draw the match
                   m++; // number of matches
                   feat1[i].fwd_match = nbrs[0];
              }
 
Step 2: eliminate mismatches with RANSAC. First, what is RANSAC?
1. RANSAC (Random Sample Consensus) (from Wikipedia)
What does the algorithm do? It fits a model to a set of observations by repeated testing and iteration, looking for the model with the smallest error together with its supporting consensus set. Applied to our SIFT feature matching, this means finding a transformation matrix such that as many feature-point correspondences as possible are consistent with it.
 
Algorithm outline:
input:
data - a set of observations
model - a model that can be fitted to data
n - the minimum number of data required to fit the model
k - the maximum number of iterations allowed in the algorithm
t - a threshold value for determining when a datum fits a model
d - the number of close data values required to assert that a model fits well to data
output:
best_model - model parameters which best fit the data (or nil if no good model is found)
best_consensus_set - data points from which this model has been estimated
best_error - the error of this model relative to the data
 
iterations := 0
best_model := nil
best_consensus_set := nil
best_error := infinity
while iterations < k
    maybe_inliers := n randomly selected values from data
    maybe_model := model parameters fitted to maybe_inliers
    consensus_set := maybe_inliers
    for every point in data not in maybe_inliers
        if point fits maybe_model with an error smaller than t
            add point to consensus_set
    if the number of elements in consensus_set is > d
        this_model := model parameters fitted to all points in consensus_set
        this_error := a measure of how well this_model fits these points
        if this_error < best_error
            best_model := this_model
            best_consensus_set := consensus_set
            best_error := this_error
    increment iterations
return best_model, best_consensus_set, best_error
Reprinted from 罗索实验室 [http://www.rosoo.net/a/201108/14854.html]; original source: blog.csdn.net/ijuliet, by ijuliet.