Tina Memo No. 1991-002
Image and Vision Computing, 9(1), 27-32, 1990.

Optimal Combination of Stereo Camera Calibration from Arbitrary Stereo Images.

N. A. Thacker and J. E. W. Mayhew (while at AI Vision Research Unit, University of Sheffield).
Last updated 6/9/2005.

Imaging Science and Biomedical Engineering Division,
Medical School, University of Manchester,
Stopford Building, Oxford Road, Manchester, M13 9PT.

Abstract.
Many stereo correspondence algorithms require relative camera geometry, as the epipolar constraint is fundamental to their matching processes. We intend to build an eye/head camera rig to mount on the mobile platform COMODE to enhance the abilities of the TINA system to recover 3D geometry from its environment. Thus we will need to be able to associate camera geometry with particular head configurations. Generic calibration of such a system would require the ability to compute camera geometry from arbitrary stereo images. This paper describes a system which solves this problem using an established corner detector combined with a robust stereo matching algorithm and a variational solution for the camera geometry.

Keywords. Calibration, Corner detection, Stereo, Stereo matching.

Introduction.
We wish to develop a stereo eye/head camera rig which will support low-level vision competences similar to those of primates: foveation, vergence, saccades and tracking.

This head configuration is currently under construction (Figure 1), and a simulation of the hardware has been used for the work presented here. We wish to be able to use this head with the TINA [1] vision system to recover stereo geometry and generate a 3D representation of the world. These low-level vision competences will require stereo correspondence of well-located image features. We show here that we can also use these correspondences to compute the relative camera geometry necessary to provide epipolar geometry for other stereo matching algorithms. Identification of such features can be achieved using an interest operator similar to that developed by Moravec [2]. The Plessey group [3] developed this idea further, and the resulting edge and corner detector was used to obtain structure from motion [4]. Thus it seems natural to use the Moravec/Plessey corner detector as our starting point.

In order to use corners to generate the necessary camera translation and rotation parameters, we need to robustly match the sets of corners obtained. We cannot use the Plessey algorithm here, as there may be substantial translations between views from two stereo cameras. Also, we cannot make much use of epipolar constraints, as this would require the camera geometry which we are trying to obtain. This is not a difficult problem to solve provided we only require a subset of the total number of corners matched.

(Figure 1 about here)

Estimation of the camera geometry needs to be robust and unbiased; we would prefer to use the variational method proposed by Trivedi [5]. However, we would require in excess of 100 data points to provide sufficient calibration accuracy, which is large compared to the number found and matched in most scenes. For this reason we have applied standard statistical methods for data combination to the resulting calibration. We have extended this idea further to the calibration of a moving camera system which moves on a one-dimensional trajectory in a space described by the calibration parameters.

Corner Detection and Matching.
The corner detector we use is that suggested by Harris and Stephens [3], which calculates an interest operator defined according to an auto-correlation of local patches of the image:

M_{uv} = \begin{pmatrix} (\partial I/\partial u)^2 \otimes w & (\partial I/\partial u)(\partial I/\partial v) \otimes w \\ (\partial I/\partial u)(\partial I/\partial v) \otimes w & (\partial I/\partial v)^2 \otimes w \end{pmatrix}

where u and v are image coordinates and \otimes w denotes convolution with a Gaussian image mask. Any function of the eigenvalues \alpha and \beta of the matrix M will have the property of rotation invariance. What is found is that the trace \mathrm{Tr}(M) = \alpha + \beta is large where there is an edge or a corner in the image, and the determinant \mathrm{Det}(M) = \alpha\beta is large only where there is a corner. Thus edges are given when either \alpha or \beta is large, and corners can be identified where both are large. Corner strength is defined as

C_{uv} = \mathrm{Det}(M) - k\,\mathrm{Tr}(M)^2

Corners are identified as local maxima in corner strength, which are fitted to a two-dimensional quadratic in order to improve positional accuracy; this accuracy has been estimated as 0.3 pixels.
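As an illustration, the interest operator and corner-strength measure above can be sketched in NumPy. This is a minimal sketch, not the authors' implementation: the Gaussian scale `sigma`, the constant `k` and the helper names are illustrative choices.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

def smooth(img, g):
    # separable Gaussian convolution: the "⊗ w" in the text
    out = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, out)

def corner_strength(img, sigma=1.0, k=0.04):
    Iu, Iv = np.gradient(img.astype(float))
    g = gaussian_kernel(sigma, radius=int(3 * sigma))
    A = smooth(Iu * Iu, g)    # (∂I/∂u)² ⊗ w
    B = smooth(Iv * Iv, g)    # (∂I/∂v)² ⊗ w
    Cc = smooth(Iu * Iv, g)   # (∂I/∂u)(∂I/∂v) ⊗ w
    det = A * B - Cc * Cc     # Det(M) = αβ
    tr = A + B                # Tr(M) = α + β
    return det - k * tr ** 2  # C_uv = Det(M) − k Tr(M)²
```

On a synthetic image containing a bright square, the strength peaks near the square's corner while pure edge points score lower, matching the eigenvalue argument above.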

Given 5 or more correspondence points in the two images, it is possible to compute the camera translation/rotation parameters for the left-to-right camera transformation. There are generally an order of magnitude more corners than this in even a relatively simple image. The corners are matched using a robust stereo matching algorithm which identifies reliable matches. Image tokens can be matched in some cases using the following heuristics:

(a) restricted search strategies (e.g. epipolars in the case of stereo);
(b) local image properties (e.g. image correlation);
(c) uniqueness;
(d) disparity gradient (or smoothness) constraints.

For stereo matching, potential matches are sought in a variable epipolar band, with a width determined by the accuracy of stereo calibration. As the corner detector finds local maxima in an auto-correlation measure, it makes sense to compare possible matches between points on the basis of local image cross-correlation. Lists of possible matches are generated, for corners in the left image to the right and right to left, and ordered in terms of the local image correlation measure

M = \frac{1}{A} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} w_{uv}\, I_{uv}\, I'_{uv}\, du\, dv \quad\text{with}\quad A^2 = \int\!\!\int w_{uv}\, I_{uv}^2\, du\, dv \,\int\!\!\int w_{uv}\, I'^{2}_{uv}\, du\, dv

where w is a Gaussian weighting function. This measure varies between 0 and 1 (close to 1 for good agreement); again the assumption has been made that there is little rotation about the viewing axis. This measure is invariant to the scale of the registered image intensity (assuming that no prior knowledge of the lighting conditions and individual camera aperture settings is available). Weak dependence on the absolute image intensity can be reintroduced using an asymmetry cut on the relative corner strength:

\frac{|C_1 - C_2|}{C_1 + C_2} < \alpha

A value of 0.85 is generally chosen for \alpha; this will allow a factor of 12 in relative corner strength, or a factor of 1.8 in image intensity. Only if the correlation measure is high (M > \rho) is the match accepted and added to the list of possible matches. \rho can be set arbitrarily high to ensure that the underlying images are essentially identical, and a value of 0.99 is generally used. We accept that this will inevitably result in some bias in matching ability for fronto-parallel surfaces.
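A sketch of the correlation measure and asymmetry cut, assuming square patches and a Gaussian weight; the function names and patch handling are illustrative, not from the paper:

```python
import numpy as np

def correlation_measure(P, Q, sigma=2.0):
    # Gaussian-weighted normalized cross-correlation of two equal-size patches:
    # M = (1/A) ∫∫ w I I' du dv  with  A² = ∫∫ w I² du dv · ∫∫ w I'² du dv
    h, wd = P.shape
    u = np.arange(h) - (h - 1) / 2.0
    v = np.arange(wd) - (wd - 1) / 2.0
    W = np.exp(-0.5 * (u[:, None] ** 2 + v[None, :] ** 2) / sigma ** 2)
    num = np.sum(W * P * Q)
    A = np.sqrt(np.sum(W * P * P) * np.sum(W * Q * Q))
    return num / A

def passes_asymmetry_cut(c1, c2, alpha=0.85):
    # |C1 − C2| / (C1 + C2) < α : weak dependence on absolute intensity
    return abs(c1 - c2) / (c1 + c2) < alpha
```

Note that scaling one patch by a constant leaves the measure unchanged, which is the intensity-scale invariance claimed above; the asymmetry cut with α = 0.85 then rejects pairs whose corner strengths differ by more than a factor of about 12.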

Candidate matches are only considered further if they involve the best correlation measure M_max found for that pair of points, matched both ways between the left and right images. This algorithm implicitly enforces one-to-one matching and also eliminates incorrect matches resulting when a feature has only been detected in one image.

Due to the sparseness of corner data in many regions of an image, it is difficult to impose a smoothness or disparity gradient constraint. However, it may be possible in future to constrain possible matching using the results from less sparse matching primitives such as edges.

On real images corner detection can be very noisy, and setting a generic threshold for corner detection is problematic. Also, high-frequency textured regions generally give rise to many corners which, on the basis of the above heuristics, are unmatchable, as there are many similar candidate matches for each feature. Thus in real images it is difficult to automate the generation of a reliable set of correspondences, potentially preventing successful ego-motion calculation. What is required is a method of identifying those features which may be unreliably matched.

Unreliable features can be defined as those which have many candidate matches and consequently may be expected to be ambiguously matched. Ambiguous matches can be excluded by selecting matches where neither list of other candidate matches has an entry above a value of M_max − ε. The required value of ε is defined by the expected variability of the cross-correlation value for correct matches and can be expected to be relatively constant for all images. ε can be set so that only very unique matches are accepted as good; a value of 0.005 has been found generally to be sufficient. Such a reliability heuristic reduces the effects of feature detection thresholds on the matching of high-frequency features.
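The acceptance logic described above (the ρ threshold, two-way mutual-best matching, and the ε ambiguity cut) might be sketched as follows, assuming a precomputed matrix of correlation scores; `reliable_matches` is a hypothetical name:

```python
import numpy as np

def reliable_matches(score, rho=0.99, eps=0.005):
    """score[i, j]: correlation measure between left corner i and right corner j.
    Returns (i, j) pairs passing the rho threshold, the mutual-best cut and the
    eps ambiguity cut (a sketch of the heuristics in the text)."""
    matches = []
    n_l, n_r = score.shape
    for i in range(n_l):
        j = int(np.argmax(score[i]))
        m_max = score[i, j]
        if m_max <= rho:                      # M > ρ: patches essentially identical
            continue
        if int(np.argmax(score[:, j])) != i:  # mutual best: one-to-one matching
            continue
        # ambiguity cut: no rival candidate above M_max − ε, in either list
        row = np.delete(score[i], j)
        col = np.delete(score[:, j], i)
        if row.size and row.max() > m_max - eps:
            continue
        if col.size and col.max() > m_max - eps:
            continue
        matches.append((i, j))
    return matches
```

A rival score within ε of the best removes both competing candidates, which is the intended behaviour for high-frequency texture where many matches look alike.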

If we have temporal match information, a more direct method of selecting reliable matches can be used. Temporal matches are sought using three-dimensional positions of corner features combined with odometry information specifying the expected motion of COMODE. Match lists are generated between temporal pairs of images in exactly the same way as for the stereo matcher. The result is a set of possible matching lists for each point in each image to its stereo and temporal counterpart. A subset of correct matches is then selected by checking that the matching between all sets of stereo and temporal images is consistent.

After removal of non-unique matches there were generally between 20 and 100 matches; fewer than 2% of these were incorrect. This is enough to obtain an estimate of the camera rotation suitable for epipolar matching, though generally too poor to obtain good geometrical accuracy. For this reason a method of combining the results from successive calibrations was required.

Camera Calibration.

It is possible to formulate the solution for an arbitrary camera rotation/translation (RT) from two sets of corresponding vector points in the images, x_i and x'_i, using a variational principle [5]. The small shifts \delta x_i and \delta x'_i needed to move these correspondences in each image, so that they satisfy an estimate of the transformation, can be approximated to linear order in an expansion about the current solution (Appendix 1), giving

\delta x_i = -\frac{F_i\, S\, \nabla F_i^T}{\nabla F_i S \nabla F_i^T + \nabla' F_i S \nabla' F_i^T} \qquad \delta x'_i = -\frac{F_i\, S\, \nabla' F_i^T}{\nabla F_i S \nabla F_i^T + \nabla' F_i S \nabla' F_i^T}

with

F_i = x_i^T (RT)\, x'_i \qquad \nabla F_i = x'^{T}_i (RT)^T \qquad \nabla' F_i = x_i^T (RT)

where the rotation/translation constraint equation F_i uses the matrix formulation first suggested by Longuet-Higgins [6], which is a matrix alternative to writing the vector constraint equation

F_i = x_i \cdot (t \wedge R x'_i) \;(= 0)

where t is the translation vector. This follows directly from the coordinate transformation equation, which is valid both for points in the real world and for image coordinates. The transformation matrix T and error matrix S are given by

T = \begin{pmatrix} 0 & -e_6 & e_5 \\ e_6 & 0 & -e_4 \\ -e_5 & e_4 & 0 \end{pmatrix} \qquad S = \begin{pmatrix} \sigma_x^2 & 0 & 0 \\ 0 & \sigma_y^2 & 0 \\ 0 & 0 & \sigma_z^2 \end{pmatrix}

where e_4, e_5, e_6 are the direction cosines (\lambda_x, \lambda_y, \lambda_z) of the translation between the optical centres of the cameras in the left camera frame.
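The equivalence of the vector and matrix forms of the constraint can be checked numerically; `skew` builds the matrix T above from the direction cosines, while the rotation and point values are arbitrary illustrative choices:

```python
import numpy as np

def skew(t):
    # the matrix T in the text, with (e4, e5, e6) = t, so that skew(t) @ v == t × v
    e4, e5, e6 = t
    return np.array([[0.0, -e6, e5],
                     [e6, 0.0, -e4],
                     [-e5, e4, 0.0]])

def rot_x(a):
    # illustrative rotation about the x axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

t = np.array([1.0, 0.2, -0.3]); t /= np.linalg.norm(t)  # direction cosines
R = rot_x(0.1)
x = np.array([0.3, -0.2, 1.0])
xp = np.array([0.25, -0.1, 1.0])
F_vec = x @ np.cross(t, R @ xp)   # F_i = x · (t ∧ R x')
F_mat = x @ skew(t) @ R @ xp      # F_i = xᵀ (T R) x'  — same value
```

For an actual correspondence (a world point seen in both cameras) F vanishes, which is the epipolar constraint the matcher ultimately relies on.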

Many of the constraints between elements of the rotation matrix can be imposed in a way that permits a unique reconstruction of the rotation matrix. This is done by parameterising the rotation matrix R in terms of Euler parameters (a quaternion representation, Appendix 2). The error matrix allows proper account to be made of the asymmetric nature of the x and y corner location accuracy introduced by the camera aspect ratio a, with \sigma_x = a \sigma_y. The error in the z direction, \sigma_z, is set to zero as per the original implementation by Trivedi. This is a relatively simple model for the expected errors on the location of corners, and a more principled one could be used if known. In our experience all corner locations are determined with the same accuracy to within a factor of two.

An appropriately weighted sum of the minimum shifts required for each point to be independently consistent with the current transformation can be formed:

E = \sum_i E_i = \sum_i \left( \delta x_i^T S^{-1} \delta x_i + \delta x'^{T}_i S^{-1} \delta x'_i \right)

Note also that

E_i = \frac{F_i^2}{\nabla F_i S \nabla F_i^T + \nabla' F_i S \nabla' F_i^T} = \frac{F_i^2}{\sigma_i^2}

The transformation matrix which is most consistent with the positions of the observed correspondences can then be obtained by minimising this sum with respect to the five free rotation and translation parameters e_1, e_2, e_3, e_5, e_6, while at the same time enforcing the constraints

e_0^2 = 1 - e_1^2 - e_2^2 - e_3^2 \qquad e_4^2 = 1 - e_5^2 - e_6^2

Derivative information can be computed for each correspondence point (Appendix 3). However, it was found that minimisation routines which could make use of this information were not very efficient or robust on this particular minimisation task. Minimisation is best done using a robust numerical minimisation routine, for example the simplex minimisation algorithm of Nelder and Mead (see for example [5]). This method lends itself to robust statistical methods should the fitted data be found to have a distribution which is non-normal.
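The error sum E can be evaluated directly from these equations; the following is a minimal sketch (not the authors' code), with S = diag(σx², σy², 0) as described above and `trivedi_error` a hypothetical name. In practice this objective would be handed to a simplex minimiser such as Nelder and Mead's.

```python
import numpy as np

def skew(t):
    e4, e5, e6 = t
    return np.array([[0.0, -e6, e5], [e6, 0.0, -e4], [-e5, e4, 0.0]])

def trivedi_error(R, t, pts_l, pts_r, sx=1.0, sy=1.0):
    # E = Σ_i F_i² / (∇F_i S ∇F_iᵀ + ∇'F_i S ∇'F_iᵀ)  with  S = diag(σx², σy², 0)
    S = np.diag([sx ** 2, sy ** 2, 0.0])
    M = skew(t) @ R                  # the combined (RT) matrix, here T R
    E = 0.0
    for x, xp in zip(pts_l, pts_r):
        F = x @ M @ xp               # constraint value F_i
        gF = M @ xp                  # ∇F_iᵀ (gradient w.r.t. x_i)
        gFp = M.T @ x                # ∇'F_iᵀ (gradient w.r.t. x'_i)
        var = gF @ S @ gF + gFp @ S @ gFp
        E += F * F / var
    return E
```

On noise-free synthetic correspondences the error is zero at the true geometry and grows for a perturbed translation direction, which is what the minimiser exploits.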

The Trivedi algorithm has no adjustable parameters and yields errors in terms of image variables which can be used to judge the accuracy of the result. This information, combined with knowledge of the corner detection accuracy, allows rogue points to be iteratively removed from the fitting process.

The number of corners located in a pair of stereo images may not be sufficient to calibrate the camera geometry accurately. For this reason we need to be able to combine the estimates of the calibration variables e from several images. This can be done using the covariance matrix C (estimated as in Appendix 3) as follows:

e_t = C_t \left( C_{t-1}^{-1} e_{t-1} + C^{-1} e \right) \qquad\text{and}\qquad C_t^{-1} = C_{t-1}^{-1} + C^{-1}

Flexibility can be obtained by limiting the size of C_t to that which provides the required calibration accuracy. This then allows the calibration to track any systematic changes in the camera system.

Calibrating a Moveable Head.
For a system which follows a one-dimensional trajectory in a high-dimensional calibration space, we can approximate this trajectory locally using linear interpolation between data points. The calibration parameters must follow such a trajectory in the case of our simulated head when we restrict the control vergence rotation angles to be symmetrical (here symmetrical is defined only in terms of control signals and places no restriction on the actual orientation of the cameras or their rotation axes). We can parameterise this curve using one free parameter, the control vergence angle \theta of both cameras, obtained from accurate odometry.

Using this parameter it is possible to interpolate calibration parameters across a range of camera angles:

e = \frac{e'(\theta'' - \theta) + e''(\theta - \theta')}{\theta'' - \theta'}

where e' and e'' are the camera transformation parameters at \theta' and \theta''. These estimates can be concatenated into one calibration vector g, which can be estimated from successive observations of e at known \theta, given the covariance C, using a Kalman filter:

g_t = g_{t-1} + C_{g_t} (\nabla_g e)^T C^{-1} (e - \hat e) \qquad\text{with}\qquad C_{g_t}^{-1} = C_{g_{t-1}}^{-1} + (\nabla_g e)^T C^{-1} (\nabla_g e)

where \hat e is the value of e predicted from g_{t-1}.

The intrinsic parameters of the camera system, focal lengths and image centres, are required as input parameters and could be assumed to be fixed for our camera rig. These can be determined independently using a combination of optical methods and alternative calibration algorithms [8]. We are currently working on several calibration systems which are to be unified within one statistical framework. The current implementation ignores radial distortions but the
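The interpolation and the information-form Kalman update above might be sketched as follows; here `H` stands for \nabla_g e (the interpolation weights mapping the stacked vector g to a predicted e), and the function names are illustrative:

```python
import numpy as np

def interpolate_calibration(theta, theta1, e1, theta2, e2):
    # e = (e'(θ'' − θ) + e''(θ − θ')) / (θ'' − θ')
    return (e1 * (theta2 - theta) + e2 * (theta - theta1)) / (theta2 - theta1)

def kalman_update(g, Ci_g, e_obs, C_obs, H):
    """One information-form update of the stacked calibration vector g.
    Ci_g is the current inverse covariance of g; H = ∇_g e (a sketch)."""
    Ci_obs = np.linalg.inv(C_obs)
    Ci_new = Ci_g + H.T @ Ci_obs @ H                      # C⁻¹_gt = C⁻¹_{g,t−1} + Hᵀ C⁻¹ H
    e_pred = H @ g                                        # ê predicted from g_{t−1}
    g_new = g + np.linalg.inv(Ci_new) @ H.T @ Ci_obs @ (e_obs - e_pred)
    return g_new, Ci_new
```

At \theta = \theta' the interpolation returns e' exactly, and each Kalman update tightens the inverse covariance of g while pulling the state toward the new observation.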
