Stereo Evaluation 2015


The stereo 2015 / flow 2015 / scene flow 2015 benchmark consists of 200 training scenes and 200 test scenes (4 color images per scene, saved in lossless PNG format). Compared to the stereo 2012 and flow 2012 benchmarks, it comprises dynamic scenes for which the ground truth has been established in a semi-automatic process. Our evaluation server computes the percentage of bad pixels averaged over all ground truth pixels of all 200 test images. For this benchmark, we consider a pixel to be correctly estimated if the disparity or flow end-point error is <3px or <5% of its true value (for scene flow, this criterion needs to be fulfilled for both disparity maps and the flow map). We require that all methods use the same parameter set for all test pairs. Our development kit provides details about the data format as well as MATLAB / C++ utility functions for reading and writing disparity maps and flow fields. More details can be found in Object Scene Flow for Autonomous Vehicles (CVPR 2015).
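The outlier criterion and the PNG encoding handled by the devkit can be illustrated with a short script. The following is a minimal sketch in Python (not the official MATLAB / C++ devkit code), assuming the standard KITTI convention that disparity maps are stored as 16-bit PNGs where the stored value divided by 256 gives the disparity and 0 marks pixels without ground truth; the file paths are purely illustrative.

    import numpy as np
    from PIL import Image

    def read_disparity(path):
        # 16-bit PNG: disparity = stored value / 256, 0 = no ground truth (assumed encoding)
        raw = np.asarray(Image.open(path), dtype=np.float32)
        return raw / 256.0, raw > 0

    def d1_outlier_rate(d_est, d_gt, valid):
        # A pixel is correct if its disparity error is < 3 px or < 5 % of the
        # true disparity; everything else counts as an outlier.
        err = np.abs(d_est - d_gt)
        outlier = (err >= 3.0) & (err >= 0.05 * np.abs(d_gt))
        return 100.0 * outlier[valid].sum() / max(int(valid.sum()), 1)

    # Illustrative paths; the evaluation server averages this over all 200 test images.
    d_gt, valid = read_disparity("training/disp_occ_0/000000_10.png")
    d_est, _ = read_disparity("results/disp_0/000000_10.png")
    print(f"D1-all: {d1_outlier_rate(d_est, d_gt, valid):.2f} %")

The same criterion (error <3px or <5%) is applied to the flow end-point error, and the scene flow metric requires it to hold for both disparity maps and the flow map of a pixel simultaneously.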

Our evaluation table ranks all methods according to the number of erroneous pixels. All methods providing less than 100% density have been interpolated using simple background interpolation, as explained in the corresponding header file in the development kit (a sketch of this interpolation follows the legend below). Legend:

  • D1: Percentage of stereo disparity outliers in first frame
  • D2: Percentage of stereo disparity outliers in second frame
  • Fl: Percentage of optical flow outliers
  • SF: Percentage of scene flow outliers (= outliers in either D1, D2 or Fl)
  • bg: Percentage of outliers averaged only over background regions
  • fg: Percentage of outliers averaged only over foreground regions
  • all: Percentage of outliers averaged over all ground truth pixels
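The background interpolation mentioned above can be sketched as follows. This is only one plausible reading of the scheme described in the devkit header (fill every gap in a row with the smaller, i.e. more distant, of the two bracketing disparities and extend the nearest valid value at the image borders); the authoritative implementation is the code shipped with the devkit.

    import numpy as np

    def interpolate_background(disp, valid):
        # Row-wise background interpolation sketch: invalid pixels inside a row
        # take the minimum of the two neighbouring valid disparities, pixels
        # beyond the first/last valid entry simply copy that entry.
        out = disp.copy()
        for y in range(out.shape[0]):
            row = out[y]
            idx = np.flatnonzero(valid[y])
            if idx.size == 0:
                continue
            row[:idx[0]] = row[idx[0]]          # extend towards the left border
            row[idx[-1] + 1:] = row[idx[-1]]    # extend towards the right border
            for a, b in zip(idx[:-1], idx[1:]):
                if b > a + 1:
                    row[a + 1:b] = min(row[a], row[b])
        return out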


Note: On 13.03.2017 we fixed several small errors in the flow (noc+occ) ground truth of the dynamic foreground objects and manually verified all images for correctness by warping them according to the ground truth. As a consequence, all error numbers have decreased slightly. If you downloaded the files prior to 13.03.2017, please download the devkit and the training set annotations with the improved ground truth again, and consider reporting these new numbers in all future publications. The last leaderboards before these corrections can be found here (optical flow 2015) and here (scene flow 2015). The leaderboard for the KITTI 2015 stereo benchmark did not change.
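The warping check mentioned in this note can be approximated in a few lines. A hedged sketch, assuming a dense ground-truth flow field in (u, v) pixel units and a validity mask; the exact verification procedure used for the correction is not part of the public devkit.

    import numpy as np

    def warp_second_frame(img1, flow, valid):
        # Backward-warp frame t+1 into frame t: the warped value at (x, y) is
        # read from img1 at (x + u, y + v) using nearest-neighbour lookup.
        # Comparing the result against frame t makes wrong flow labels visible.
        h, w = flow.shape[:2]
        xx, yy = np.meshgrid(np.arange(w), np.arange(h))
        xs = np.clip(np.rint(xx + flow[..., 0]).astype(int), 0, w - 1)
        ys = np.clip(np.rint(yy + flow[..., 1]).astype(int), 0, h - 1)
        warped = img1[ys, xs]
        warped[~valid] = 0      # ignore pixels without ground truth
        return warped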

Important Policy Update: As more and more unpublished work and re-implementations of existing work are submitted to KITTI, we have established a new policy: from now on, only submissions with significant novelty that lead to a peer-reviewed paper in a conference or journal are allowed. Minor modifications of existing algorithms or student research projects are not allowed. Such work must be evaluated on a split of the training set. To ensure that our policy is adopted, new users must detail their status, describe their work and specify the targeted venue during registration. Furthermore, we will regularly delete all entries that are six months old but are still anonymous or do not have a paper associated with them. For conferences, six months are enough to determine whether a paper has been accepted and to add the bibliography information. For longer review cycles, you need to resubmit your results.
Additional information used by the methods
  • Flow: Method uses optical flow (2 temporally adjacent images)
  • Multiview: Method uses more than 2 temporally adjacent images
  • Motion stereo: Method uses epipolar geometry for computing optical flow
  • Additional training data: Use of additional data sources for training (see details)


Rank Method Setting Code D1-bg D1-fg D1-all Density Runtime Environment
1 MoCha-Stereo 1.36 % 2.43 % 1.53 % 100.00 % 0.27 s NVIDIA Tesla A6000 (PyTorch)
2 DiffuVolume 1.35 % 2.51 % 1.54 % 100.00 % 0.36 s GPU @ 2.5 Ghz (Python)
3 GANet+ADL 1.38 % 2.38 % 1.55 % 100.00 % 0.67s NVIDIA RTX 3090 (PyTorch)
4 MC-Stereo 1.36 % 2.51 % 1.55 % 100.00 % 0.40 s 1 core @ 2.5 Ghz (Python)
5 IGEV-ICGNet 1.38 % 2.55 % 1.57 % 100.00 % 0.18 s NVIDIA Tesla A5000 (Pytorch)
6 yjlig 1.37 % 2.62 % 1.58 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
7 UGNet 1.34 % 2.77 % 1.58 % 100.00 % 0.2 s GPU @ 3.0 Ghz (Python)
8 Any-IGEV 1.43 % 2.35 % 1.58 % 100.00 % 0.32 s GPU @ 2.5 Ghz (Python)
9 OpenStereo-IGEV code 1.44 % 2.31 % 1.59 % 100.00 % 0.18 s NVIDIA-3090
10 GSSNet 1.31 % 2.96 % 1.59 % 100.00 % 0.78 s 1 core @ 2.5 Ghz (C/C++)
11 CWA-stereo-v1 1.38 % 2.66 % 1.59 % 100.00 % 0.23 s 2080
12 MDM-Stereo 1.28 % 3.13 % 1.59 % 100.00 % 0.09 s 1 core @ 2.5 Ghz (C/C++)
13 CroCo-Stereo code 1.38 % 2.65 % 1.59 % 100.00 % 0.93s NVIDIA A100
P. Weinzaepfel, T. Lucas, V. Leroy, Y. Cabon, V. Arora, R. Brégier, G. Csurka, L. Antsfeld, B. Chidlovskii and J. Revaud: CroCo v2: Improved Cross-view Completion Pre-training for Stereo Matching and Optical Flow. ICCV 2023.
14 IGEV-Stereo code 1.38 % 2.67 % 1.59 % 100.00 % 0.18 s NVIDIA RTX 3090 (PyTorch)
G. Xu, X. Wang, X. Ding and X. Yang: Iterative Geometry Encoding Volume for Stereo Matching. CVPR 2023.
15 AMSCF-Net 1.32 % 2.98 % 1.60 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
16 CGF-ACV code 1.32 % 3.08 % 1.61 % 100.00 % 0.24 s NVIDIA RTX 3090 (PyTorch)
17 UPFNet 1.38 % 2.85 % 1.62 % 100.00 % 0.25 s 1 core @ 2.5 Ghz (C/C++)
Q. Chen, B. Ge and J. Quan: Unambiguous Pyramid Cost Volumes Fusion for Stereo Matching. IEEE Transactions on Circuits and Systems for Video Technology 2023.
18 yjlig 1.42 % 2.66 % 1.62 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
19 SSMF 1.37 % 2.91 % 1.63 % 100.00 % 0.20 s 1 core @ 2.5 Ghz (Python)
20 UDG 1.39 % 2.88 % 1.64 % 100.00 % 0.4 s 1 core @ 2.5 Ghz (C/C++)
21 SCVFormer 1.31 % 3.26 % 1.64 % 100.00 % 0.09 s NVIDIA RTX 3090 (PyTorch)
22 DiffuVolume 1.39 % 2.93 % 1.65 % 100.00 % 0.36 s GPU @ 2.5 Ghz (Python)
23 M-FUSE
This method uses optical flow information.
This method makes use of multiple (>2) views.
code 1.40 % 2.91 % 1.65 % 100.00 % 1.3 s GPU
L. Mehl, A. Jahedi, J. Schmalfuss and A. Bruhn: M-FUSE: Multi-frame Fusion for Scene Flow Estimation. Proc. Winter Conference on Applications of Computer Vision (WACV) 2023.
24 SF2SE3
This method uses optical flow information.
code 1.40 % 2.91 % 1.65 % 100.00 % 2.7 s GPU @ >3.5 Ghz (Python)
L. Sommer, P. Schröppel and T. Brox: SF2SE3: Clustering Scene Flow into SE (3)-Motions via Proposal and Selection. DAGM German Conference on Pattern Recognition 2022.
25 LEAStereo code 1.40 % 2.91 % 1.65 % 100.00 % 0.30 s GPU @ 2.5 Ghz (Python)
X. Cheng, Y. Zhong, M. Harandi, Y. Dai, X. Chang, H. Li, T. Drummond and Z. Ge: Hierarchical Neural Architecture Search for Deep Stereo Matching. Advances in Neural Information Processing Systems 2020.
26 EFLOW
This method uses optical flow information.
1.40 % 2.91 % 1.65 % 100.00 % 0.06 s 1 core @ 2.5 Ghz (Python)
27 SplatFlow3D
This method uses optical flow information.
code 1.40 % 2.91 % 1.65 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
28 LoS 1.42 % 2.81 % 1.65 % 100.00 % 0.19 s 1 core @ 2.5 Ghz (Python)
29 ACVNet code 1.37 % 3.07 % 1.65 % 100.00 % 0.2 s NVIDIA RTX 3090 (PyTorch)
G. Xu, J. Cheng, P. Guo and X. Yang: Attention Concatenation Volume for Accurate and Efficient Stereo Matching. CVPR 2022.
30 SPRNet 1.37 % 3.12 % 1.66 % 100.00 % 0.2 s GPU @ >3.5 Ghz (Python)
31 DCANet 1.42 % 2.91 % 1.66 % 100.00 % 0.19 s 1 core @ 2.5 Ghz (C/C++)
32 PCWNet code 1.37 % 3.16 % 1.67 % 100.00 % 0.44 s 1 core @ 2.5 Ghz (C/C++)
Z. Shen, Y. Dai, X. Song, Z. Rao, D. Zhou and L. Zhang: PCW-Net: Pyramid Combination and Warping Cost Volume for Stereo Matching. European Conference on Computer Vision(ECCV) 2022.
33 LaC+GANet code 1.44 % 2.83 % 1.67 % 100.00 % 1.8 s GPU @ 2.5 Ghz (Python)
B. Liu, H. Yu and Y. Long: Local Similarity Pattern and Cost Self-Reassembling for Deep Stereo Matching Networks. Proceedings of the AAAI Conference on Artificial Intelligence 2022.
34 CREStereo code 1.45 % 2.86 % 1.69 % 100.00 % 0.41 s GPU @ >3.5 Ghz (Python)
J. Li, P. Wang, P. Xiong, T. Cai, Z. Yan, L. Yang, J. Liu, H. Fan and S. Liu: Practical Stereo Matching via Cascaded Recurrent Network with Adaptive Correlation. 2022.
35 GU 1.42 % 3.05 % 1.69 % 100.00 % 1 s GPU @ 2.5 Ghz (Python)
36 SCVFormer 1.34 % 3.46 % 1.70 % 100.00 % 0.09 s NVIDIA RTX 3090 (PyTorch)
37 DuMa-Net 1.40 % 3.18 % 1.70 % 100.00 % 0.38 s PyTorch GPU
S. Sun, R. Liu and S. Sun: Range-free disparity estimation with self-adaptive dual-matching. IET Computer Vision.
38 EGA-Stereo code 1.42 % 3.12 % 1.70 % 100.00 % 0.41 s 1 core @ 2.5 Ghz (Python)
39 Any-RAFT 1.44 % 3.04 % 1.70 % 100.00 % 0.34 s GPU @ Nvidia A40 (Python)
40 SAGIF-GMM 1.52 % 2.66 % 1.71 % 100.00 % 0.37 s GPU @ 2.5 Ghz (Python)
41 IEG-Net 1.39 % 3.31 % 1.71 % 100.00 % 0.40 s 1 core @ 2.5 Ghz (Python)
42 DANet-Stereo 1.41 % 3.26 % 1.72 % 100.00 % 2.7 s GPU @ 2.5 Ghz (Python)
43 DKT-IGEV 1.46 % 3.05 % 1.72 % 100.00 % 0.18 s 1 core @ 2.5 Ghz (C/C++)
44 AFNet 1.36 % 3.61 % 1.73 % 100.00 % 0.25 s 1 core @ 2.5 Ghz (Python)
45 Patchmatch Stereo++ code 1.55 % 2.71 % 1.74 % 100.00 % 0.2 s
W. Ren, Q. Liao, Z. Shao, X. Lin, X. Yue, Y. Zhang and Z. Lu: Patchmatch Stereo++: Patchmatch Binocular Stereo with Continuous Disparity Optimization. Proceedings of the 31st ACM International Conference on Multimedia 2023.
46 OnestageStereo code 1.56 % 2.62 % 1.74 % 100.00 % 0.02 s GPU @ 2.5 Ghz (Python)
47 CSPN 1.51 % 2.88 % 1.74 % 100.00 % 1.0 s GPU @ 2.5 Ghz (Python)
X. Cheng, P. Wang and R. Yang: Learning Depth with Convolutional Spatial Propagation Network. IEEE Transactions on Pattern Analysis and Machine Intelligence(T-PAMI) 2019.
48 NeXt-Stereo 1.51 % 2.93 % 1.75 % 100.00 % 0.06 s 1 core @ 2.5 Ghz (Python)
49 ProNet 1.48 % 3.11 % 1.75 % 100.00 % 0.33 s GPU @ 2.5 Ghz (Python)
50 ASNet 1.45 % 3.27 % 1.75 % 100.00 % 0.17 s GPU @ >3.5 Ghz (Python)
51 GMOStereo 1.60 % 2.54 % 1.76 % 100.00 % 0.30 s GPU @ 2.5 Ghz (Python)
52 IGEV_TEST2 1.50 % 3.06 % 1.76 % 100.00 % 0.06 s 1 core @ 2.5 Ghz (C/C++)
53 IGEV_15 1.50 % 3.06 % 1.76 % 100.00 % 0.07 s 1 core @ 2.5 Ghz (C/C++)
54 IGE_Corr 1.50 % 3.06 % 1.76 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
55 DLNR 1.60 % 2.59 % 1.76 % 100.00 % ~0.3 s GPU @ 2.5 Ghz (Python)
56 LaC+GwcNet code 1.43 % 3.44 % 1.77 % 100.00 % 0.65 s GPU @ 2.5 Ghz (Python)
B. Liu, H. Yu and Y. Long: Local Similarity Pattern and Cost Self-Reassembling for Deep Stereo Matching Networks. Proceedings of the AAAI Conference on Artificial Intelligence 2022.
57 GMStereo code 1.49 % 3.14 % 1.77 % 100.00 % 0.17 s GPU (Python)
H. Xu, J. Zhang, J. Cai, H. Rezatofighi, F. Yu, D. Tao and A. Geiger: Unifying Flow, Stereo and Depth Estimation. arXiv preprint arXiv:2211.05783 2022.
58 CFNet+PPF 1.47 % 3.26 % 1.77 % 100.00 % 0.22 s 1 core @ 2.5 Ghz (Python)
59 UNI code 1.51 % 3.06 % 1.77 % 100.00 % 2 s 1 core @ 2.5 Ghz (Python)
60 D2Stereo 1.58 % 2.70 % 1.77 % 100.00 % 0.25 s GPU @ 2.5 Ghz (Python)
61 NLCA-Net v2 code 1.41 % 3.56 % 1.77 % 100.00 % 0.67 s GPU @ >3.5 Ghz (Python)
Z. Rao, D. Yuchao, S. Zhelun and H. Renjie: Rethinking Training Strategy in Stereo Matching. IEEE Transactions on Neural Networks and Learning Systems.
62 GANet+DSMNet 1.48 % 3.23 % 1.77 % 100.00 % 2.0 s GPU @ 2.5 Ghz (C/C++)
F. Zhang, X. Qi, R. Yang, V. Prisacariu, B. Wah and P. Torr: Domain-invariant Stereo Matching Networks. European Conference on Computer Vision (ECCV) 2020.
63 DVANet 1.47 % 3.32 % 1.78 % 100.00 % 0.03 s 1 core @ 2.5 Ghz (Python)
64 PFSMNet code 1.54 % 3.02 % 1.79 % 100.00 % 0.31 s 1 core @ 2.5 Ghz (C/C++)
K. Zeng, Y. Wang, Q. Zhu, J. Mao and H. Zhang: Deep Progressive Fusion Stereo Network. IEEE Transactions on Intelligent Transportation Systems 2021.
65 DIGEV 1.66 % 2.47 % 1.80 % 100.00 % 0.18 s 1 core @ 2.5 Ghz (C/C++)
66 SUW-Stereo 1.47 % 3.45 % 1.80 % 100.00 % 1.8 s 1 core @ 2.5 Ghz (C/C++)
H. Ren, A. Raj, M. El-Khamy and J. Lee: SUW-Learn: Joint Supervised, Unsupervised, Weakly Supervised Deep Learning for Monocular Depth Estimation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops 2020.
67 TemporalStereo
This method makes use of multiple (>2) views.
code 1.61 % 2.78 % 1.81 % 100.00 % 0.04 s 1 core @ 2.5 Ghz (Python)
Y. Zhang, M. Poggi and S. Mattoccia: TemporalStereo: Efficient Spatial-Temporal Stereo Matching Network. IROS 2023.
68 Binary TTC
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 2 s GPU @ 1.0 Ghz (Python)
A. Badki, O. Gallo, J. Kautz and P. Sen: Binary TTC: A Temporal Geofence for Autonomous Navigation. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021.
69 ScaleRAFT+RBO
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 1 s 1 core @ 2.5 Ghz (C/C++)
70 CamLiRAFT
This method uses optical flow information.
code 1.48 % 3.46 % 1.81 % 100.00 % 1 s GPU @ 2.5 Ghz (Python + C/C++)
H. Liu, T. Lu, Y. Xu, J. Liu and L. Wang: Learning Optical Flow and Scene Flow with Bidirectional Camera-LiDAR Fusion. arXiv preprint arXiv:2303.12017 2023.
71 ScaleRAFT
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
72 Scale-flow-Nerf
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 0.1 s 1 core @ 2.5 Ghz (C/C++)
73 SFG
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
74 Scale-flow
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 0.8 s GPU @ 2.5 Ghz (Python)
H. Ling, Q. Sun, Z. Ren, Y. Liu, H. Wang and Z. Wang: Scale-flow: Estimating 3D Motion from Video. Proceedings of the 30th ACM International Conference on Multimedia 2022.
75 RAFT3D+mscv
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
76 CamLiRAFT-NR
This method uses optical flow information.
code 1.48 % 3.46 % 1.81 % 100.00 % 1 s GPU @ 2.5 Ghz (Python + C/C++)
H. Liu, T. Lu, Y. Xu, J. Liu and L. Wang: Learning Optical Flow and Scene Flow with Bidirectional Camera-LiDAR Fusion. arXiv preprint arXiv:2303.12017 2023.
77 RAFT3D+MSCV+ROB
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 1 s 1 core @ 2.5 Ghz (C/C++)
78 RAFT-3D
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 2 s GPU @ 2.5 Ghz (Python + C/C++)
Z. Teed and J. Deng: RAFT-3D: Scene Flow using Rigid-Motion Embeddings. arXiv preprint arXiv:2012.00726 2020.
79 ScaleRAFT3D
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 1 s 1 core @ 2.5 Ghz (C/C++)
80 GANet-deep code 1.48 % 3.46 % 1.81 % 100.00 % 1.8 s GPU @ 2.5 Ghz (Python)
F. Zhang, V. Prisacariu, R. Yang and P. Torr: GA-Net: Guided Aggregation Net for End-to-end Stereo Matching. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
81 CamLiFlow
This method uses optical flow information.
code 1.48 % 3.46 % 1.81 % 100.00 % 1.2 s GPU @ 2.5 Ghz (Python + C/C++)
H. Liu, T. Lu, Y. Xu, J. Liu, W. Li and L. Chen: CamLiFlow: Bidirectional Camera-LiDAR Fusion for Joint Optical Flow and Scene Flow Estimation. CVPR 2022.
82 Stereo expansion
This method uses optical flow information.
code 1.48 % 3.46 % 1.81 % 100.00 % 2 s GPU @ 2.5 Ghz (Python)
G. Yang and D. Ramanan: Upgrading Optical Flow to 3D Scene Flow through Optical Expansion. CVPR 2020.
83 RAFT-3D++
This method uses optical flow information.
1.48 % 3.46 % 1.81 % 100.00 % 0.5 s 1 core @ 2.5 Ghz (Python)
84 S3GNet-C 1.54 % 3.16 % 1.81 % 100.00 % 0.13 s GPU @ 2.5 Ghz (Python)
85 FGDS-Net 1.49 % 3.46 % 1.82 % 100.00 % 0.3 s 1 core @ 2.5 Ghz (Python)
86 ADStereo 1.53 % 3.27 % 1.82 % 100.00 % 0.05 s GPU @ 2.5 Ghz (Python)
87 GwcNet+PPF 1.53 % 3.27 % 1.82 % 100.00 % 0.3 s 1 core @ 2.5 Ghz (Python)
88 TBFE-Net 1.52 % 3.36 % 1.82 % 100.00 % 0.3 s 1 core @ 2.5 Ghz (Python)
89 OptStereo 1.50 % 3.43 % 1.82 % 100.00 % 0.10 s GPU @ 2.5 Ghz (Python)
H. Wang, R. Fan, P. Cai and M. Liu: PVStereo: Pyramid voting module for end-to-end self-supervised stereo matching. IEEE Robotics and Automation Letters 2021.
90 RAFT-Stereo code 1.58 % 3.05 % 1.82 % 100.00 % 0.38 s 1 core @ 2.5 Ghz (Python)
91 RCGSNP 1.56 % 3.17 % 1.83 % 100.00 % 0.12 s GPU @ 2.5 Ghz (Python)
92 URDAD_1 1.53 % 3.34 % 1.83 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
93 LoS_RVC 1.58 % 3.08 % 1.83 % 100.00 % 0.19 s 1 core @ 2.5 Ghz (C/C++)
94 NLCA-Net-3 code 1.45 % 3.78 % 1.83 % 100.00 % 0.44 s >8 cores @ 3.5 Ghz (C/C++)
Z. Rao, M. He, Y. Dai, Z. Zhu, B. Li and R. He: NLCA-Net: a non-local context attention network for stereo matching. APSIPA Transactions on Signal and Information Processing 2020.
95 GOAT 1.71 % 2.51 % 1.84 % 100.00 % 0.29 s 1 core @ 2.5 Ghz (Python)
96 OAGNet18 1.71 % 2.51 % 1.84 % 100.00 % 0.4 s GPU @ 3.0 Ghz (Python)
97 AMNet 1.53 % 3.43 % 1.84 % 100.00 % 0.9 s GPU @ 2.5 Ghz (Python)
X. Du, M. El-Khamy and J. Lee: AMNet: Deep Atrous Multiscale Stereo Disparity Estimation Networks. 2019.
98 FAPEEM 1.61 % 3.08 % 1.85 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
99 UCFNet_RVC code 1.57 % 3.33 % 1.86 % 100.00 % 0.21 s GPU @ 2.5 Ghz (Python)
Z. Shen, X. Song, Y. Dai, D. Zhou, Z. Rao and L. Zhang: Digging Into Uncertainty-Based Pseudo-Label for Robust Stereo Matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023.
100 PSMNet+ 1.51 % 3.60 % 1.86 % 100.00 % 0.41 s GPU @ 2.5 Ghz (Python)
101 URDAD 1.54 % 3.54 % 1.87 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
102 MAF-Stereo code 1.62 % 3.15 % 1.87 % 100.00 % 0.07 s GPU @ 2.5 Ghz (Python)
103 EAC-Stereo code 1.52 % 3.68 % 1.88 % 100.00 % 0.38 s 1 core @ 2.5 Ghz (Python)
104 CFNet code 1.54 % 3.56 % 1.88 % 100.00 % 0.18 s 1 core @ 2.5 Ghz (Python)
Z. Shen, Y. Dai and Z. Rao: CFNet: Cascade and Fused Cost Volume for Robust Stereo Matching. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021.
Z. Shen, X. Song, Y. Dai, D. Zhou, Z. Rao and L. Zhang: Digging Into Uncertainty-Based Pseudo-Label for Robust Stereo Matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023.
105 non-parametric 1.56 % 3.49 % 1.88 % 100.00 % 0.34 s GPU @ 2.5 Ghz (Python)
106 RigidMask+ISF
This method uses optical flow information.
code 1.53 % 3.65 % 1.89 % 100.00 % 3.3 s GPU @ 2.5 Ghz (Python)
G. Yang and D. Ramanan: Learning to Segment Rigid Motions from Two Frames. CVPR 2021.
107 AcfNet code 1.51 % 3.80 % 1.89 % 100.00 % 0.48 s GPU @ 2.5 Ghz (Python)
Y. Zhang, Y. Chen, X. Bai, S. Yu, K. Yu, Z. Li and K. Yang: Adaptive Unimodal Cost Volume Filtering for Deep Stereo Matching. AAAI 2020.
108 TemporalCoEx 1.71 % 2.78 % 1.89 % 100.00 % 0.04 s GPU @ 2.5 Ghz (Python)
109 PSMNet+PPF 1.55 % 3.63 % 1.89 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (Python)
110 EAC-Stereo 1.52 % 3.87 % 1.91 % 100.00 % 0.38 s 1 core @ 2.5 Ghz (Python)
111 Fast-ACV+ code 1.68 % 3.06 % 1.91 % 100.00 % 0.04s 1 core @ 2.5 Ghz (Python)
112 NLCA_NET_v2_RVC 1.51 % 3.97 % 1.92 % 100.00 % 0.67 s GPU @ 2.5 Ghz (Python)
Z. Rao, M. He, Y. Dai, Z. Zhu, B. Li and R. He: NLCA-Net: a non-local context attention network for stereo matching. APSIPA Transactions on Signal and Information Processing 2020.
113 CDN code 1.66 % 3.20 % 1.92 % 100.00 % 0.4 s GPU @ 2.5 Ghz (Python)
D. Garg, Y. Wang, B. Hariharan, M. Campbell, K. Weinberger and W. Chao: Wasserstein Distances for Stereo Disparity Estimation. Advances in Neural Information Processing Systems 2020.
114 Abc-Net 1.47 % 4.20 % 1.92 % 100.00 % 0.83 s 4 core @ 2.5 Ghz (Python)
X. Li, Y. Fan, G. Lv and H. Ma: Area-based correlation and non-local attention network for stereo matching. The Visual Computer 2021.
115 GANet-15 code 1.55 % 3.82 % 1.93 % 100.00 % 0.36 s GPU (Pytorch)
F. Zhang, V. Prisacariu, R. Yang and P. Torr: GA-Net: Guided Aggregation Net for End-to-end Stereo Matching. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
116 PCVNet 1.68 % 3.19 % 1.93 % 100.00 % 0.05 s GPU @ 2.5 Ghz (Python)
117 MDCTest code 1.65 % 3.37 % 1.94 % 100.00 % MDCT s 1 core @ 2.5 Ghz (Python)
118 CAL-Net 1.59 % 3.76 % 1.95 % 100.00 % 0.44 s 2 cores @ 2.5 Ghz (Python)
S. Chen, B. Li, W. Wang, H. Zhang, H. Li and Z. Wang: Cost Affinity Learning Network for Stereo Matching. IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021 2021.
119 SASNet 1.61 % 3.65 % 1.95 % 100.00 % 0.21 s GPU @ >3.5 Ghz (Python)
120 NLCA-Net code 1.53 % 4.09 % 1.96 % 100.00 % 0.6 s 1 core @ 2.5 Ghz (C/C++)
Z. Rao, M. He, Y. Dai, Z. Zhu, B. Li and R. He: NLCA-Net: a non-local context attention network for stereo matching. APSIPA Transactions on Signal and Information Processing 2020.
121 CFNet_RVC code 1.65 % 3.53 % 1.96 % 100.00 % 0.22 s GPU @ 2.5 Ghz (Python)
Z. Shen, Y. Dai and Z. Rao: CFNet: Cascade and Fused Cost Volume for Robust Stereo Matching. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021.
Z. Shen, X. Song, Y. Dai, D. Zhou, Z. Rao and L. Zhang: Digging Into Uncertainty-Based Pseudo-Label for Robust Stereo Matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023.
122 High_U+A_coex 1.63 % 3.60 % 1.96 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
123 PGNet 1.64 % 3.60 % 1.96 % 100.00 % 0.7 s 1 core @ 2.5 Ghz (python)
S. Chen, Z. Xiang, C. Qiao, Y. Chen and T. Bai: PGNet: Panoptic parsing guided deep stereo matching. Neurocomputing 2021.
124 GCGANet 1.65 % 3.59 % 1.97 % 100.00 % 0.15 s 1 core @ 2.5 Ghz (Python)
125 DMCNet 1.49 % 4.40 % 1.97 % 100.00 % 0.27 s GPU @ 2.5 Ghz (Python)
126 HITNet code 1.74 % 3.20 % 1.98 % 100.00 % 0.02 s GPU @ 2.5 Ghz (C/C++)
V. Tankovich, C. Häne, Y. Zhang, A. Kowdle, S. Fanello and S. Bouaziz: HITNet: Hierarchical Iterative Tile Refinement Network for Real-time Stereo Matching. CVPR 2021.
127 SGNet 1.63 % 3.76 % 1.99 % 100.00 % 0.6 s 1 core @ 2.5 Ghz (Python + C/C++)
S. Chen, Z. Xiang, C. Qiao, Y. Chen and T. Bai: SGNet: Semantics Guided Deep Stereo Matching. Proceedings of the Asian Conference on Computer Vision (ACCV) 2020.
128 GEMA-Stereo 1.66 % 3.65 % 1.99 % 100.00 % 0.03 s GPU @ 2.5 Ghz (Python)
129 ICGNet-gwc 1.62 % 3.90 % 2.00 % 100.00 % 0.15 s 1 core @ 2.5 Ghz (C/C++)
130 CSN code 1.59 % 4.03 % 2.00 % 100.00 % 0.6 s 1 core @ 2.5 Ghz (Python)
X. Gu, Z. Fan, S. Zhu, Z. Dai, F. Tan and P. Tan: Cascade cost volume for high-resolution multi-view stereo and stereo matching. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2020.
131 PCMAnet code 1.71 % 3.55 % 2.02 % 100.00 % 0.27 s GPU @ 2.5 Ghz (Python)
132 CoEx code 1.74 % 3.41 % 2.02 % 100.00 % 0.027 s GPU RTX 2080Ti (Python)
A. Bangunharcana, J. Cho, S. Lee, I. Kweon, K. Kim and S. Kim: Correlate-and-Excite: Real-Time Stereo Matching via Guided Cost Volume Excitation. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2021.
133 HD^3-Stereo code 1.70 % 3.63 % 2.02 % 100.00 % 0.14 s NVIDIA Pascal Titan XP
Z. Yin, T. Darrell and F. Yu: Hierarchical Discrete Distribution Decomposition for Match Density Estimation. CVPR 2019.
134 GFGANet 1.62 % 4.04 % 2.02 % 100.00 % 0.39 s 1 core @ 2.5 Ghz (C/C++)
135 SCV-Stereo code 1.67 % 3.78 % 2.02 % 100.00 % 0.08 s GPU @ 2.5 Ghz (Python)
H. Wang, R. Fan and M. Liu: SCV-Stereo: Learning stereo matching from a sparse cost volume. 2021 IEEE International Conference on Image Processing (ICIP) 2021.
136 GDANet 1.61 % 4.08 % 2.02 % 100.00 % 0.04 s 1 core @ 2.5 Ghz (Python)
137 AANet+ code 1.65 % 3.96 % 2.03 % 100.00 % 0.06 s NVIDIA V100 GPU
H. Xu and J. Zhang: AANet: Adaptive Aggregation Network for Efficient Stereo Matching. CVPR 2020.
138 OB_GWC 1.59 % 4.32 % 2.04 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
139 ED-Net 1.71 % 3.80 % 2.05 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
140 OGMNet_WO_GP_SA 1.76 % 3.55 % 2.06 % 100.00 % 0.4 s GPU @ 2.5 Ghz (Python)
141 LR-PSMNet code 1.65 % 4.13 % 2.06 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
W. Chuah, R. Tennakoon, R. Hoseinnezhad, A. Bab-Hadiashar and D. Suter: Adjusting Bias in Long Range Stereo Matching: A semantics guided approach. 2020.
142 OB_COEX_1 1.77 % 3.56 % 2.07 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
143 iRaftStereo_RVC 1.88 % 3.03 % 2.07 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
H. Jiang, R. Xu and W. Jiang: An Improved RaftStereo Trained with A Mixed Dataset for the Robust Vision Challenge 2022. arXiv preprint arXiv:2210.12785 2022.
144 PSM + SMD-Nets code 1.69 % 4.01 % 2.08 % 100.00 % 0.41 s 1 core @ 2.5 Ghz (Python + C/C++)
F. Tosi, Y. Liao, C. Schmitt and A. Geiger: SMD-Nets: Stereo Mixture Density Networks. Conference on Computer Vision and Pattern Recognition (CVPR) 2021.
145 MDCNet 1.76 % 3.68 % 2.08 % 100.00 % 0.05 s 1 core @ 2.5 Ghz (C/C++)
W. Chen, X. Jia, M. Wu and Z. Liang: Multi-Dimensional Cooperative Network for Stereo Matching. IEEE Robotics and Automation Letters 2022.
146 EdgeStereo-V2 1.84 % 3.30 % 2.08 % 100.00 % 0.32s Nvidia GTX Titan Xp
X. Song, X. Zhao, L. Fang, H. Hu and Y. Yu: Edgestereo: An effective multi-task learning network for stereo matching and edge detection. International Journal of Computer Vision (IJCV) 2019.
147 3D-MSNet / MSNet3D code 1.75 % 3.87 % 2.10 % 100.00 % 1.5s Python,1080Ti
F. Shamsafar, S. Woerz, R. Rahim and A. Zell: MobileStereoNet: Towards Lightweight Deep Networks for Stereo Matching. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2022.
148 GwcNet-g code 1.74 % 3.93 % 2.11 % 100.00 % 0.32 s GPU @ 2.0 Ghz (Python + C/C++)
X. Guo, K. Yang, W. Yang, X. Wang and H. Li: Group-wise correlation stereo network. CVPR 2019.
149 SSPCVNet 1.75 % 3.89 % 2.11 % 100.00 % 0.9 s 1 core @ 2.5 Ghz (Python)
Z. Wu, X. Wu, X. Zhang, S. Wang and L. Ju: Semantic Stereo Matching With Pyramid Cost Volumes. The IEEE International Conference on Computer Vision (ICCV) 2019.
150 WSMCnet code 1.72 % 4.19 % 2.13 % 100.00 % 0.39s Nvidia GTX 1070 (Pytorch)
Y. Wang, H. Wang, G. Yu, M. Yang, Y. Yuan and J. Quan: Stereo Matching Algorithm Based on Three-Dimensional Convolutional Neural Network. Acta Optica Sinica 2019.
151 HSM-1.8x code 1.80 % 3.85 % 2.14 % 100.00 % 0.14 s Titan X Pascal
G. Yang, J. Manela, M. Happold and D. Ramanan: Hierarchical Deep Stereo Matching on High-Resolution Images. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
152 DeepPruner (best) code 1.87 % 3.56 % 2.15 % 100.00 % 0.18 s 1 core @ 2.5 Ghz (C/C++)
S. Duggal, S. Wang, W. Ma, R. Hu and R. Urtasun: DeepPruner: Learning Efficient Stereo Matching via Differentiable PatchMatch. ICCV 2019.
153 W-Stereo-a-r 1.70 % 4.48 % 2.16 % 100.00 % 0.07 s 1 core @ 2.5 Ghz (Python)
154 CGF-F-B 1.75 % 4.20 % 2.16 % 100.00 % 0.26 s GPU @ 2.5 Ghz (Python)
155 PMS++_Fast 1.93 % 3.30 % 2.16 % 100.00 % 0.40 s 1 core @ 2.5 Ghz (C/C++)
156 Stereo-fusion-SJTU 1.87 % 3.61 % 2.16 % 100.00 % 0.7 s Nvidia GTX Titan Xp
X. Song, X. Zhao, H. Hu and L. Fang: EdgeStereo: A Context Integrated Residual Pyramid Network for Stereo Matching. Asian Conference on Computer Vision 2018.
157 OGMNet18 1.97 % 3.16 % 2.17 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
158 AutoDispNet-CSS code 1.94 % 3.37 % 2.18 % 100.00 % 0.9 s 1 core @ 2.5 Ghz (C/C++)
T. Saikia, Y. Marrakchi, A. Zela, F. Hutter and T. Brox: AutoDispNet: Improving Disparity Estimation with AutoML. The IEEE International Conference on Computer Vision (ICCV) 2019.
159 BGNet+ 1.81 % 4.09 % 2.19 % 100.00 % 0.03 s GPU @ 2.5 Ghz (Python)
B. Xu, Y. Xu, X. Yang, W. Jia and Y. Guo: Bilateral Grid Learning for Stereo Matching Network. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021.
160 OA_COEX 1.85 % 3.95 % 2.20 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
161 Bi3D code 1.95 % 3.48 % 2.21 % 100.00 % 0.48 s GPU @ 1.5 Ghz (Python)
A. Badki, A. Troccoli, K. Kim, J. Kautz, P. Sen and O. Gallo: Bi3D: Stereo Depth Estimation via Binary Classifications. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.
162 NDR 1.88 % 3.87 % 2.21 % 100.00 % 0.05 s 1 core @ 2.5 Ghz (Python)
163 pcanet code 1.94 % 3.57 % 2.21 % 100.00 % 0.27 s 1 core @ 2.5 Ghz (C/C++)
164 dh 1.86 % 4.01 % 2.22 % 100.00 % 1.9 s 1 core @ 2.5 Ghz (C/C++)
F. Zhang, V. Prisacariu, R. Yang and P. Torr: GA-Net: Guided Aggregation Net for End-to-end Stereo Matching. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
165 AGDNet 1.77 % 4.44 % 2.22 % 100.00 % 0.08 s 2 cores @ 2.5 Ghz (Python)
166 PSMNet+CBAM 1.78 % 4.42 % 2.22 % 100.00 % 0.36 s NVIDIA RTX 3090 (Python)
167 SENSE
This method uses optical flow information.
code 2.07 % 3.01 % 2.22 % 100.00 % 0.32s GPU, GTX 2080Ti
H. Jiang, D. Sun, V. Jampani, Z. Lv, E. Learned-Miller and J. Kautz: SENSE: A Shared Encoder Network for Scene-Flow Estimation. The IEEE International Conference on Computer Vision (ICCV) 2019.
168 GASN 1.88 % 3.96 % 2.23 % 100.00 % 0.09 s NVIDIA RTX 3090 (PyTorch)
169 PSMNet+Pre 1.79 % 4.44 % 2.23 % 100.00 % 0.36 s NVIDIA RTX 3090 (Python)
170 SegStereo code 1.88 % 4.07 % 2.25 % 100.00 % 0.6 s Nvidia GTX Titan Xp
G. Yang, H. Zhao, J. Shi, Z. Deng and J. Jia: SegStereo: Exploiting Semantic Information for Disparity Estimation. ECCV 2018.
171 DTF_SENSE
This method uses optical flow information.
This method makes use of multiple (>2) views.
2.08 % 3.13 % 2.25 % 100.00 % 0.76 s 1 core @ 2.5 Ghz (C/C++)
R. Schuster, C. Unger and D. Stricker: A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions. IEEE Winter Conference on Applications of Computer Vision (WACV) 2021.
172 OpenStereo-PSMNet code 1.80 % 4.58 % 2.26 % 100.00 % 0.21 s GPU @ 2.0 Ghz (Python)
173 MCV-MFC 1.95 % 3.84 % 2.27 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
Z. Liang, Y. Guo, Y. Feng, W. Chen, L. Qiao, L. Zhou, J. Zhang and H. Liu: Stereo Matching Using Multi-level Cost Volume and Multi-scale Feature Constancy. IEEE transactions on pattern analysis and machine intelligence 2019.
174 HSM-1.5x code 1.95 % 3.93 % 2.28 % 100.00 % 0.085 s Titan X Pascal
G. Yang, J. Manela, M. Happold and D. Ramanan: Hierarchical Deep Stereo Matching on High-Resolution Images. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
175 GAANet 1.91 % 4.25 % 2.30 % 100.00 % 0.08 s 2080Ti GPU @ 2.5 Ghz (Python)
176 Separable Convs code 1.90 % 4.36 % 2.31 % 100.00 % 2 s 1 core @ 2.5 Ghz (Python)
R. Rahim, F. Shamsafar and A. Zell: Separable Convolutions for Optimizing 3D Stereo Networks. 2021 IEEE International Conference on Image Processing (ICIP) 2021.
177 Separable Convs code 1.90 % 4.36 % 2.31 % 100.00 % 2 s 1 core @ 2.5 Ghz (Python)
R. Rahim, F. Shamsafar and A. Zell: Separable Convolutions for Optimizing 3D Stereo Networks. 2021 IEEE International Conference on Image Processing (ICIP) 2021.
178 CFP-Net code 1.90 % 4.39 % 2.31 % 100.00 % 0.9 s 8 cores @ 2.5 Ghz (Python)
Z. Zhu, M. He, Y. Dai, Z. Rao and B. Li: Multi-scale Cross-form Pyramid Network for Stereo Matching. arXiv preprint 2019.
179 PSMNet code 1.86 % 4.62 % 2.32 % 100.00 % 0.41 s Nvidia GTX Titan Xp
J. Chang and Y. Chen: Pyramid Stereo Matching Network. arXiv preprint arXiv:1803.08669 2018.
180 GANetREF_RVC code 1.88 % 4.58 % 2.33 % 100.00 % 1.62 s GPU @ >3.5 Ghz (Python + C/C++)
F. Zhang, V. Prisacariu, R. Yang and P. Torr: GA-Net: Guided Aggregation Net for End-to-end Stereo Matching. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2019.
181 TriStereoNet code 1.86 % 4.77 % 2.35 % 100.00 % 0.5 s Python,1080Ti
F. Shamsafar and A. Zell: TriStereoNet: A Trinocular Framework for Multi-baseline Disparity Estimation. arXiv preprint arXiv:2111.12502 2021.
182 MABNet_origin code 1.89 % 5.02 % 2.41 % 100.00 % 0.38 s Nvidia rtx2080ti (Python)
J. Xing, Z. Qi, J. Dong, J. Cai and H. Liu: MABNet: A Lightweight Stereo Network Based on Multibranch Adjustable Bottleneck Module.
183 gsnet 1.93 % 4.95 % 2.43 % 100.00 % 0.2 s GPU @ 3.0 Ghz (Python)
184 MDTE4 2.06 % 4.32 % 2.43 % 100.00 % 0.03 s 1 core @ 2.5 Ghz (C/C++)
185 MDCTE3 2.06 % 4.32 % 2.43 % 100.00 % 0.06 s 1 core @ 2.5 Ghz (C/C++)
186 SAStereo 2.21 % 3.68 % 2.46 % 100.00 % 0.04 s GPU @ 2.5 Ghz (Python)
187 AFDNet 2.21 % 3.78 % 2.47 % 100.00 % 0.31 s 1 core @ 2.5 Ghz (C/C++)
188 ERSCNet 2.11 % 4.46 % 2.50 % 100.00 % 0.28 s GPU @ 2.5 Ghz (Python)
Anonymous: ERSCNet. Proceedings of the European Conference on Computer Vision (ECCV) 2020.
189 BGNet 2.07 % 4.74 % 2.51 % 100.00 % 0.02 s GPU @ >3.5 Ghz (Python)
B. Xu, Y. Xu, X. Yang, W. Jia and Y. Guo: Bilateral Grid Learning for Stereo Matching Network. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021.
190 UberATG-DRISF
This method uses optical flow information.
2.16 % 4.49 % 2.55 % 100.00 % 0.75 s CPU+GPU @ 2.5 Ghz (Python)
W. Ma, S. Wang, R. Hu, Y. Xiong and R. Urtasun: Deep Rigid Instance Scene Flow. CVPR 2019.
191 AANet code 1.99 % 5.39 % 2.55 % 100.00 % 0.062 s NVIDIA V100 GPU
H. Xu and J. Zhang: AANet: Adaptive Aggregation Network for Efficient Stereo Matching. CVPR 2020.
192 GASN-FA 2.25 % 4.13 % 2.56 % 100.00 % 0.05 s NVIDIA RTX 3090 (PyTorch)
193 PDSNet 2.29 % 4.05 % 2.58 % 100.00 % 0.5 s 1 core @ 2.5 Ghz (Python)
S. Tulyakov, A. Ivanov and F. Fleuret: Practical Deep Stereo (PDS): Toward applications-friendly deep stereo matching. Proceedings of the international conference on Neural Information Processing Systems (NIPS) 2018.
194 DeepPruner (fast) code 2.32 % 3.91 % 2.59 % 100.00 % 0.06 s 1 core @ 2.5 Ghz (C/C++)
S. Duggal, S. Wang, W. Ma, R. Hu and R. Urtasun: DeepPruner: Learning Efficient Stereo Matching via Differentiable PatchMatch. ICCV 2019.
195 FADNet code 2.50 % 3.10 % 2.60 % 100.00 % 0.05 s Tesla V100 (Python)
Q. Wang, S. Shi, S. Zheng, K. Zhao and X. Chu: FADNet: A Fast and Accurate Network for Disparity Estimation. arXiv preprint arXiv:2003.10758 2020.
196 MMStereo 2.25 % 4.38 % 2.61 % 100.00 % 0.04 s Nvidia Titan RTX (Python)
K. Shankar, M. Tjersland, J. Ma, K. Stone and M. Bajracharya: A Learned Stereo Depth System for Robotic Manipulation in Homes.
197 SCV code 2.22 % 4.53 % 2.61 % 100.00 % 0.36 s Nvidia GTX 1080 Ti
C. Lu, H. Uchiyama, D. Thomas, A. Shimada and R. Taniguchi: Sparse Cost Volume for Efficient Stereo Matching. Remote Sensing 2018.
198 WaveletStereo 2.24 % 4.62 % 2.63 % 100.00 % 0.27 s 1 core @ 2.5 Ghz (C/C++)
Anonymous: WaveletStereo: Learning wavelet coefficients for stereo matching. arXiv: Computer Vision and Pattern Recognition 2019.
199 RLStereo code 2.09 % 5.38 % 2.64 % 100.00 % 0.03 s 1 core @ 2.5 Ghz (Python)
Anonymous: RLStereo: Real-time Stereo Matching based on Reinforcement Learning. Proceedings of the IEEE/CVF International Conference on Computer Vision 2021.
200 AANet_RVC 2.23 % 4.89 % 2.67 % 100.00 % 0.1 s GPU @ 2.5 Ghz (Python)
H. Xu and J. Zhang: AANet: Adaptive Aggregation Network for Efficient Stereo Matching. CVPR 2020.
201 CRL code 2.48 % 3.59 % 2.67 % 100.00 % 0.47 s Nvidia GTX 1080
J. Pang, W. Sun, J. Ren, C. Yang and Q. Yan: Cascade residual learning: A two-stage convolutional neural network for stereo matching. ICCV Workshop on Geometry Meets Deep Learning 2017.
202 TSNnet_Teacher 2.24 % 4.99 % 2.70 % 100.00 % 0.01 s 1 core @ 2.5 Ghz (Python)
203 2D-MSNet / MSNet2D code 2.49 % 4.53 % 2.83 % 100.00 % 0.4s Python,1080Ti
F. Shamsafar, S. Woerz, R. Rahim and A. Zell: MobileStereoNet: Towards Lightweight Deep Networks for Stereo Matching. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2022.
204 GC-NET 2.21 % 6.16 % 2.87 % 100.00 % 0.9 s Nvidia GTX Titan X
A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach and A. Bry: End-to-End Learning of Geometry and Context for Deep Stereo Regression. Proceedings of the International Conference on Computer Vision (ICCV) 2017.
205 PVStereo 2.29 % 6.50 % 2.99 % 100.00 % 0.10 s GPU @ 2.5 Ghz (Python)
H. Wang, R. Fan, P. Cai and M. Liu: PVStereo: Pyramid voting module for end-to-end self-supervised stereo matching. IEEE Robotics and Automation Letters 2021.
206 LRCR 2.55 % 5.42 % 3.03 % 100.00 % 49.2 s Nvidia GTX Titan X
Z. Jie, P. Wang, Y. Ling, B. Zhao, Y. Wei, J. Feng and W. Liu: Left-Right Comparative Recurrent Model for Stereo Matching. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.
207 cas-stereo 2.62 % 5.17 % 3.04 % 100.00 % 0.1 s 8 cores @ 2.5 Ghz (Python)
208 TSNnet_student 2.35 % 6.70 % 3.07 % 100.00 % 0.01 s 1 core @ 2.5 Ghz (C/C++)
209 Fast DS-CS code 2.83 % 4.31 % 3.08 % 100.00 % 0.02 s GPU @ 2.0 Ghz (Python + C/C++)
K. Yee and A. Chakrabarti: Fast Deep Stereo with 2D Convolutional Processing of Cost Signatures. WACV 2020 (to appear).
210 AdaStereo 2.59 % 5.55 % 3.08 % 100.00 % 0.41 s GPU @ 2.5 Ghz (Python)
X. Song, G. Yang, X. Zhu, H. Zhou, Z. Wang and J. Shi: AdaStereo: A Simple and Efficient Approach for Adaptive Stereo Matching. CVPR 2021.
X. Song, G. Yang, X. Zhu, H. Zhou, Y. Ma, Z. Wang and J. Shi: AdaStereo: An Efficient Domain-Adaptive Stereo Matching Approach. IJCV 2021.
211 RecResNet code 2.46 % 6.30 % 3.10 % 100.00 % 0.3 s GPU @ NVIDIA TITAN X (Tensorflow)
K. Batsos and P. Mordohai: RecResNet: A Recurrent Residual CNN Architecture for Disparity Map Enhancement. In International Conference on 3D Vision (3DV) 2018.
212 NVStereoNet code 2.62 % 5.69 % 3.13 % 100.00 % 0.6 s NVIDIA Titan Xp
N. Smolyanskiy, A. Kamenev and S. Birchfield: On the Importance of Stereo for Accurate Depth Estimation: An Efficient Semi-Supervised Deep Neural Network Approach. arXiv preprint arXiv:1803.09719 2018.
213 DRR 2.58 % 6.04 % 3.16 % 100.00 % 0.4 s Nvidia GTX Titan X
S. Gidaris and N. Komodakis: Detect, Replace, Refine: Deep Structured Prediction For Pixel Wise Labeling. arXiv preprint arXiv:1612.04770 2016.
214 TSNnet_naive 2.64 % 6.47 % 3.28 % 100.00 % 0.01 s 1 core @ 2.5 Ghz (C/C++)
215 DWARF
This method uses optical flow information.
3.20 % 3.94 % 3.33 % 100.00 % 0.14s - 1.43s TitanXP - JetsonTX2
F. Aleotti, M. Poggi, F. Tosi and S. Mattoccia: Learning end-to-end scene flow by distilling single tasks knowledge. Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) 2020.
216 SsSMnet 2.70 % 6.92 % 3.40 % 100.00 % 0.8 s P100
Y. Zhong, Y. Dai and H. Li: Self-Supervised Learning for Stereo Matching with Self-Improving Ability. arXiv:1709.00930 2017.
217 L-ResMatch code 2.72 % 6.95 % 3.42 % 100.00 % 48 s 1 core @ 2.5 Ghz (C/C++)
A. Shaked and L. Wolf: Improved Stereo Matching with Constant Highway Networks and Reflective Loss. arXiv preprint arxiv:1701.00165 2016.
218 Displets v2 code 3.00 % 5.56 % 3.43 % 100.00 % 265 s >8 cores @ 3.0 Ghz (Matlab + C/C++)
F. Guney and A. Geiger: Displets: Resolving Stereo Ambiguities using Object Knowledge. Conference on Computer Vision and Pattern Recognition (CVPR) 2015.
219 LBPS code 2.85 % 6.35 % 3.44 % 100.00 % 0.39 s GPU @ 2.5 Ghz (C/C++)
P. Knöbelreiter, C. Sormann, A. Shekhovtsov, F. Fraundorfer and T. Pock: Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.
220 ACOSF
This method uses optical flow information.
2.79 % 7.56 % 3.58 % 100.00 % 5 min 1 core @ 3.0 Ghz (Matlab + C/C++)
C. Li, H. Ma and Q. Liao: Two-Stage Adaptive Object Scene Flow Using Hybrid CNN-CRF Model. International Conference on Pattern Recognition (ICPR) 2020.
221 CNNF+SGM 2.78 % 7.69 % 3.60 % 100.00 % 71 s TESLA K40C
F. Zhang and B. Wah: Fundamental Principles on Learning New Features for Effective Dense Matching. IEEE Transactions on Image Processing 2018.
222 PBCP 2.58 % 8.74 % 3.61 % 100.00 % 68 s Nvidia GTX Titan X
A. Seki and M. Pollefeys: Patch Based Confidence Prediction for Dense Disparity Map. British Machine Vision Conference (BMVC) 2016.
223 SGM-Net 2.66 % 8.64 % 3.66 % 100.00 % 67 s Titan X
A. Seki and M. Pollefeys: SGM-Nets: Semi-Global Matching With Neural Networks. CVPR 2017.
224 DSMNet-synthetic 3.11 % 6.72 % 3.71 % 100.00 % 1.6 s 4 cores @ 2.5 Ghz (C/C++)
F. Zhang, X. Qi, R. Yang, V. Prisacariu, B. Wah and P. Torr: Domain-invariant Stereo Matching Networks. European Conference on Computer Vision (ECCV) 2020.
225 HSM-Net_RVC code 2.74 % 8.73 % 3.74 % 100.00 % 0.97 s GPU @ 2.5 Ghz (Python)
G. Yang, J. Manela, M. Happold and D. Ramanan: Hierarchical deep stereo matching on high-resolution images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2019.
226 MABNet_tiny code 3.04 % 8.07 % 3.88 % 100.00 % 0.11 s Nvidia rtx2080ti (Python)
J. Xing, Z. Qi, J. Dong, J. Cai and H. Liu: MABNet: A Lightweight Stereo Network Based on Multibranch Adjustable Bottleneck Module.
227 MC-CNN-acrt code 2.89 % 8.88 % 3.89 % 100.00 % 67 s Nvidia GTX Titan X (CUDA, Lua/Torch7)
J. Zbontar and Y. LeCun: Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches. Submitted to JMLR.
228 FD-Fusion code 3.22 % 7.44 % 3.92 % 100.00 % 0.01 s 1 core @ 2.5 Ghz (C/C++)
M. Ferrera, A. Boulch and J. Moras: Fast Stereo Disparity Maps Refinement By Fusion of Data-Based And Model-Based Estimations. International Conference on 3D Vision (3DV) 2019.
229 Reversing-PSMNet code 3.13 % 8.70 % 4.06 % 100.00 % 0.41 s 1 core @ 1.5 Ghz (Python)
F. Aleotti, F. Tosi, L. Zhang, M. Poggi and S. Mattoccia: Reversing the cycle: self-supervised deep stereo through enhanced monocular distillation. European Conference on Computer Vision (ECCV) 2020.
230 DGS 3.21 % 8.62 % 4.11 % 100.00 % 0.32 s GPU @ 2.5 Ghz (Python + C/C++)
W. Chuah, R. Tennakoon, A. Bab-Hadiashar and D. Suter: Achieving Domain Robustness in Stereo Matching Networks by Removing Shortcut Learning. arXiv preprint arXiv:2106.08486 2021.
231 PRSM
This method uses optical flow information.
This method makes use of multiple (>2) views.
code 3.02 % 10.52 % 4.27 % 99.99 % 300 s 1 core @ 2.5 Ghz (C/C++)
C. Vogel, K. Schindler and S. Roth: 3D Scene Flow Estimation with a Piecewise Rigid Scene Model. IJCV 2015.
232 RS-IPA 3.13 % 10.05 % 4.28 % 100.00 % 2 min 1 core @ 2.5 Ghz (C/C++)
233 DispNetC code 4.32 % 4.41 % 4.34 % 100.00 % 0.06 s Nvidia GTX Titan X (Caffe)
N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy and T. Brox: A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation. CVPR 2016.
234 SGM-Forest 3.11 % 10.74 % 4.38 % 99.92 % 6 seconds 1 core @ 3.0 Ghz (Python/C/C++)
J. Schönberger, S. Sinha and M. Pollefeys: Learning to Fuse Proposals from Multiple Scanline Optimizations in Semi-Global Matching. European Conference on Computer Vision (ECCV) 2018.
235 SSF
This method uses optical flow information.
3.55 % 8.75 % 4.42 % 100.00 % 5 min 1 core @ 2.5 Ghz (Matlab + C/C++)
Z. Ren, D. Sun, J. Kautz and E. Sudderth: Cascaded Scene Flow Prediction using Semantic Segmentation. International Conference on 3D Vision (3DV) 2017.
236 SMV 3.45 % 9.32 % 4.43 % 100.00 % 0.5 s GPU @ 2.5 Ghz (C/C++)
W. Yuan, Y. Zhang, B. Wu, S. Zhu, P. Tan, M. Wang and Q. Chen: Stereo Matching by Self-supervision of Multiscopic Vision. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2021.
237 ISF
This method uses optical flow information.
4.12 % 6.17 % 4.46 % 100.00 % 10 min 1 core @ 3 Ghz (C/C++)
A. Behl, O. Jafari, S. Mustikovela, H. Alhaija, C. Rother and A. Geiger: Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?. International Conference on Computer Vision (ICCV) 2017.
238 Content-CNN 3.73 % 8.58 % 4.54 % 100.00 % 1 s Nvidia GTX Titan X (Torch)
W. Luo, A. Schwing and R. Urtasun: Efficient Deep Learning for Stereo Matching. CVPR 2016.
239 MADnet code 3.75 % 9.20 % 4.66 % 100.00 % 0.02 s GPU @ 2.5 Ghz (Python)
A. Tonioni, F. Tosi, M. Poggi, S. Mattoccia and L. Di Stefano: Real-Time self-adaptive deep stereo. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
240 Self-SuperFlow-ft
This method uses optical flow information.
3.81 % 8.92 % 4.66 % 100.00 % 0.13 s GTX 1080 Ti
K. Bendig, R. Schuster and D. Stricker: Self-SuperFlow: Self-supervised Scene Flow Prediction in Stereo Sequences. International Conference on Image Processing (ICIP) 2022.
241 DTF_PWOC
This method uses optical flow information.
This method makes use of multiple (>2) views.
3.91 % 8.57 % 4.68 % 100.00 % 0.38 s RTX 2080 Ti
R. Schuster, C. Unger and D. Stricker: A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions. IEEE Winter Conference on Applications of Computer Vision (WACV) 2021.
242 PSMNet+ [syn2real] 3.17 % 12.47 % 4.72 % 100.00 % 0.41 s GPU @ 2.5 Ghz (Python)
243 P3SNet+ code 4.15 % 7.59 % 4.72 % 100.00 % 0.01 s 1 core @ 2.5 Ghz (Python)
A. Emlek and M. Peker: P3SNet: Parallel Pyramid Pooling Stereo Network. IEEE Transactions on Intelligent Transportation Systems 2023.
244 GwcNet+ [syn2real] 3.23 % 12.79 % 4.82 % 100.00 % 0.41 s GPU @ 2.5 Ghz (Python)
245 VN 4.29 % 7.65 % 4.85 % 100.00 % 0.5 s GPU @ 3.5 Ghz (Python + C/C++)
P. Knöbelreiter and T. Pock: Learned Collaborative Stereo Refinement. German Conference on Pattern Recognition (GCPR) 2019.
246 Anonymous
This method uses optical flow information.
4.07 % 9.41 % 4.96 % 100.00 % 0.1 s GPU @ 2.5 Ghz (Python)
247 MC-CNN-WS code 3.78 % 10.93 % 4.97 % 100.00 % 1.35 s 1 core 2.5 Ghz + K40 NVIDIA, Lua-Torch
S. Tulyakov, A. Ivanov and F. Fleuret: Weakly supervised learning of deep metrics for stereo reconstruction. ICCV 2017.
248 3DMST 3.36 % 13.03 % 4.97 % 100.00 % 93 s 1 core @ >3.5 Ghz (C/C++)
X. Lincheng Li and L. Zhang: 3D Cost Aggregation with Multiple Minimum Spanning Trees for Stereo Matching. Submitted to Applied Optics.
249 CBMV_ROB code 3.55 % 12.09 % 4.97 % 100.00 % 250 s 6 core @ 3.0 Ghz (Python + C/C++)
K. Batsos, C. Cai and P. Mordohai: CBMV: A Coalesced Bidirectional Matching Volume for Disparity Estimation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.
250 OSF+TC
This method uses optical flow information.
This method makes use of multiple (>2) views.
4.11 % 9.64 % 5.03 % 100.00 % 50 min 1 core @ 2.5 Ghz (C/C++)
M. Neoral and J. Šochman: Object Scene Flow with Temporal Consistency. 22nd Computer Vision Winter Workshop (CVWW) 2017.
251 P3SNet code 4.40 % 8.28 % 5.05 % 100.00 % 0.01 s GPU @ 2.5 Ghz (Python)
A. Emlek and M. Peker: P3SNet: Parallel Pyramid Pooling Stereo Network. IEEE Transactions on Intelligent Transportation Systems 2023.
252 CBMV code 4.17 % 9.53 % 5.06 % 100.00 % 250 s 6 cores @ 3.0 Ghz (Python,C/C++,CUDA Nvidia TitanX)
K. Batsos, C. Cai and P. Mordohai: CBMV: A Coalesced Bidirectional Matching Volume for Disparity Estimation. 2018.
253 PWOC-3D
This method uses optical flow information.
code 4.19 % 9.82 % 5.13 % 100.00 % 0.13 s GTX 1080 Ti
R. Saxena, R. Schuster, O. Wasenmüller and D. Stricker: PWOC-3D: Deep Occlusion-Aware End-to-End Scene Flow Estimation. Intelligent Vehicles Symposium (IV) 2019.
254 StereoVAE 4.25 % 10.18 % 5.23 % 100.00 % 0.03 s Jetson AGX Xavier GPU
Q. Chang, X. Li, X. Xu, X. Liu, Y. Li and J. Miyazaki: StereoVAE: A lightweight stereo matching system using embedded GPUs. International Conference on Robotics and Automation 2023.
255 OSF 2018
This method uses optical flow information.
code 4.11 % 11.12 % 5.28 % 100.00 % 390 s 1 core @ 2.5 Ghz (Matlab + C/C++)
M. Menze, C. Heipke and A. Geiger: Object Scene Flow. ISPRS Journal of Photogrammetry and Remote Sensing (JPRS) 2018.
256 SPS-St code 3.84 % 12.67 % 5.31 % 100.00 % 2 s 1 core @ 3.5 Ghz (C/C++)
K. Yamaguchi, D. McAllester and R. Urtasun: Efficient Joint Segmentation, Occlusion Labeling, Stereo and Flow Estimation. ECCV 2014.
257 MDP
This method uses stereo information.
4.19 % 11.25 % 5.36 % 100.00 % 11.4 s 4 cores @ 3.5 Ghz (Matlab + C/C++)
A. Li, D. Chen, Y. Liu and Z. Yuan: Coordinating Multiple Disparity Proposals for Stereo Computation. IEEE Conference on Computer Vision and Pattern Recognition 2016.
258 SFF++
This method uses optical flow information.
This method makes use of multiple (>2) views.
4.27 % 12.38 % 5.62 % 100.00 % 78 s 4 cores @ 3.5 Ghz (C/C++)
R. Schuster, O. Wasenmüller, C. Unger, G. Kuschk and D. Stricker: SceneFlowFields++: Multi-frame Matching, Visibility Prediction, and Robust Interpolation for Scene Flow Estimation. International Journal of Computer Vision (IJCV) 2019.
259 OSF
This method uses optical flow information.
code 4.54 % 12.03 % 5.79 % 100.00 % 50 min 1 core @ 2.5 Ghz (C/C++)
M. Menze and A. Geiger: Object Scene Flow for Autonomous Vehicles. Conference on Computer Vision and Pattern Recognition (CVPR) 2015.
260 pSGM 4.84 % 11.64 % 5.97 % 100.00 % 7.77 s 4 cores @ 3.5 Ghz (C/C++)
Y. Lee, M. Park, Y. Hwang, Y. Shin and C. Kyung: Memory-Efficient Parametric Semiglobal Matching. IEEE Signal Processing Letters 2018.
261 CSF
This method uses optical flow information.
4.57 % 13.04 % 5.98 % 99.99 % 80 s 1 core @ 2.5 Ghz (C/C++)
Z. Lv, C. Beall, P. Alcantarilla, F. Li, Z. Kira and F. Dellaert: A Continuous Optimization Approach for Efficient and Accurate Scene Flow. European Conf. on Computer Vision (ECCV) 2016.
262 MBM 4.69 % 13.05 % 6.08 % 100.00 % 0.13 s 1 core @ 3.0 Ghz (C/C++)
N. Einecke and J. Eggert: A Multi-Block-Matching Approach for Stereo. IV 2015.
263 CRD-Fusion code 4.59 % 13.68 % 6.11 % 100.00 % 0.02 s GPU @ 2.5 Ghz (Python)
X. Fan, S. Jeon and B. Fidan: Occlusion-Aware Self-Supervised Stereo Matching with Confidence Guided Raw Disparity Fusion. Conference on Robots and Vision 2022.
264 LDC-S 5.78 % 7.92 % 6.14 % 100.00 % 0.04 s 1 core @ 2.5 Ghz (Python)
265 AAFS+ 5.02 % 11.75 % 6.14 % 100.00 % 0.01 s 1 core @ 2.5 Ghz (Python)
266 PR-Sceneflow
This method uses optical flow information.
code 4.74 % 13.74 % 6.24 % 100.00 % 150 s 4 core @ 3.0 Ghz (Matlab + C/C++)
C. Vogel, K. Schindler and S. Roth: Piecewise Rigid Scene Flow. ICCV 2013.
267 LDCNetLG 5.65 % 9.46 % 6.28 % 100.00 % 0.04 s 1 core @ 2.5 Ghz (Python)
268 DispSegNet 4.20 % 16.97 % 6.33 % 100.00 % 0.9 s GPU @ 2.5 Ghz (Python)
J. Zhang, K. Skinner, R. Vasudevan and M. Johnson-Roberson: DispSegNet: Leveraging Semantics for End-to-End Learning of Disparity Estimation From Stereo Imagery. IEEE Robotics and Automation Letters 2019.
269 DeepCostAggr code 5.34 % 11.35 % 6.34 % 99.98 % 0.03 s GPU @ 2.5 Ghz (C/C++)
A. Kuzmin, D. Mikushin and V. Lempitsky: End-to-end Learning of Cost-Volume Aggregation for Real-time Dense Stereo. 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP) 2017.
270 SGM_RVC 5.06 % 13.00 % 6.38 % 100.00 % 0.11 s Nvidia GTX 980
H. Hirschmüller: Stereo Processing by Semi-Global Matching and Mutual Information. IEEE Transactions on Pattern Analysis and Machine Intelligence 2008.
271 LDC-L 5.65 % 10.19 % 6.41 % 100.00 % 0.04 s 1 core @ 2.5 Ghz (Python)
272 UHP 5.00 % 13.70 % 6.45 % 100.00 % 0.02 s GPU @ 2.5 Ghz (C/C++)
273 SceneFFields
This method uses optical flow information.
5.12 % 13.83 % 6.57 % 100.00 % 65 s 4 cores @ 3.7 Ghz (C/C++)
R. Schuster, O. Wasenmüller, G. Kuschk, C. Bailer and D. Stricker: SceneFlowFields: Dense Interpolation of Sparse Scene Flow Correspondences. IEEE Winter Conference on Applications of Computer Vision (WACV) 2018.
274 SPS+FF++
This method uses optical flow information.
code 5.47 % 12.19 % 6.59 % 100.00 % 36 s 1 core @ 3.5 Ghz (C/C++)
R. Schuster, O. Wasenmüller and D. Stricker: Dense Scene Flow from Stereo Disparity and Optical Flow. ACM Computer Science in Cars Symposium (CSCS) 2018.
275 Flow2Stereo 5.01 % 14.62 % 6.61 % 99.97 % 0.05 s GPU @ 2.5 Ghz (Python)
P. Liu, I. King, M. Lyu and J. Xu: Flow2Stereo: Effective Self-Supervised Learning of Optical Flow and Stereo Matching. CVPR 2020.
276 FSF+MS
This method uses optical flow information.
This method makes use of the epipolar geometry.
This method makes use of multiple (>2) views.
5.72 % 11.84 % 6.74 % 100.00 % 2.7 s 4 cores @ 3.5 Ghz (C/C++)
T. Taniai, S. Sinha and Y. Sato: Fast Multi-frame Stereo Scene Flow with Motion Segmentation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) 2017.
277 AABM 4.88 % 16.07 % 6.74 % 100.00 % 0.08 s 1 core @ 3.0 Ghz (C/C++)
N. Einecke and J. Eggert: Stereo Image Warping for Improved Depth Estimation of Road Surfaces. IV 2013.
278 test_ours
This method uses optical flow information.
5.62 % 12.81 % 6.82 % 100.00 % 0.1 s 1 core @ 2.5 Ghz (Python)
279 SGM+C+NL
This method uses optical flow information.
code 5.15 % 15.29 % 6.84 % 100.00 % 4.5 min 1 core @ 2.5 Ghz (C/C++)
H. Hirschmüller: Stereo Processing by Semiglobal Matching and Mutual Information. PAMI 2008.
D. Sun, S. Roth and M. Black: A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them. IJCV 2013.
280 SGM+LDOF
This method uses optical flow information.
code 5.15 % 15.29 % 6.84 % 100.00 % 86 s 1 core @ 2.5 Ghz (C/C++)
H. Hirschmüller: Stereo Processing by Semiglobal Matching and Mutual Information. PAMI 2008.
T. Brox and J. Malik: Large Displacement Optical Flow: Descriptor Matching in Variational Motion Estimation. PAMI 2011.
281 SGM+SF
This method uses optical flow information.
5.15 % 15.29 % 6.84 % 100.00 % 45 min 16 core @ 3.2 Ghz (C/C++)
H. Hirschmüller: Stereo Processing by Semiglobal Matching and Mutual Information. PAMI 2008.
M. Hornacek, A. Fitzgibbon and C. Rother: SphereFlow: 6 DoF Scene Flow from RGB-D Pairs. CVPR 2014.
282 SNCC 5.36 % 16.05 % 7.14 % 100.00 % 0.08 s 1 core @ 3.0 Ghz (C/C++)
N. Einecke and J. Eggert: A Two-Stage Correlation Method for Stereoscopic Depth Estimation. DICTA 2010.
283 Permutation Stereo 5.53 % 15.47 % 7.18 % 99.93 % 30 s GPU @ 2.5 Ghz (Matlab)
P. Brousseau and S. Roy: A Permutation Model for the Self-Supervised Stereo Matching Problem. 2022 19th Conference on Robots and Vision (CRV) 2022.
284 PASMnet code 5.41 % 16.36 % 7.23 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
L. Wang, Y. Guo, Y. Wang, Z. Liang, Z. Lin, J. Yang and W. An: Parallax Attention for Unsupervised Stereo Correspondence Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence(T-PAMI) 2020.
285 AAFS 6.27 % 13.95 % 7.54 % 100.00 % 0.01 s 1 core @ 2.5 Ghz (C/C++)
J. Chang, P. Chang and Y. Chen: Attention-Aware Feature Aggregation for Real-time Stereo Matching on Edge Devices. Proceedings of the Asian Conference on Computer Vision 2020.
286 Z2ZNCC 6.55 % 13.19 % 7.65 % 99.93 % 0.035s Jetson TX2 GPU @ 1.0 Ghz (CUDA)
Q. Chang, A. Zha, W. Wang, X. Liu, M. Onishi, L. Lei, M. Er and T. Maruyama: Efficient stereo matching on embedded GPUs with zero-means cross correlation. Journal of Systems Architecture 2022.
287 ReS2tAC
This method uses stereo information.
6.27 % 16.07 % 7.90 % 86.03 % 0.06 s Jetson AGX GPU @ 1.5 Ghz (C/C++)
B. Ruf, J. Mohrs, M. Weinmann, S. Hinz and J. Beyerer: ReS2tAC - UAV-Borne Real-Time SGM Stereo Optimized for Embedded ARM and CUDA Devices. Sensors 2021.
288 Self-SuperFlow
This method uses optical flow information.
5.78 % 19.76 % 8.11 % 100.00 % 0.13 s GTX 1080 Ti
K. Bendig, R. Schuster and D. Stricker: Self-SuperFlow: Self-supervised Scene Flow Prediction in Stereo Sequences. International Conference on Image Processing (ICIP) 2022.
289 CSCT+SGM+MF 6.91 % 14.87 % 8.24 % 100.00 % 0.0064 s Nvidia GTX Titan X @ 1.0 Ghz (CUDA)
D. Hernandez-Juarez, A. Chacon, A. Espinosa, D. Vazquez, J. Moure and A. Lopez: Embedded real-time stereo estimation via Semi-Global Matching on the GPU. Procedia Computer Science 2016.
290 3DG-DVO
This method uses optical flow information.
7.62 % 11.44 % 8.26 % 100.00 % 0.04 s GPU @ 1.5 Ghz (Python)
291 MBMGPU 6.61 % 16.70 % 8.29 % 100.00 % 0.0019 s GPU @ 1.0 Ghz (CUDA)
Q. Chang and T. Maruyama: Real-Time Stereo Vision System: A Multi-Block Matching on GPU. IEEE Access 2018.
292 MeshStereo code 5.82 % 21.21 % 8.38 % 100.00 % 87 s 1 core @ 2.5 Ghz (C/C++)
C. Zhang, Z. Li, Y. Cheng, R. Cai, H. Chao and Y. Rui: MeshStereo: A Global Stereo Model With Mesh Alignment Regularization for View Interpolation. The IEEE International Conference on Computer Vision (ICCV) 2015.
293 PCOF + ACTF
This method uses optical flow information.
6.31 % 19.24 % 8.46 % 100.00 % 0.08 s GPU @ 2.0 Ghz (C/C++)
M. Derome, A. Plyer, M. Sanfourche and G. Le Besnerais: A Prediction-Correction Approach for Real-Time Optical Flow Computation Using Stereo. German Conference on Pattern Recognition 2016.
294 PCOF-LDOF
This method uses optical flow information.
6.31 % 19.24 % 8.46 % 100.00 % 50 s 1 core @ 3.0 Ghz (C/C++)
M. Derome, A. Plyer, M. Sanfourche and G. Le Besnerais: A Prediction-Correction Approach for Real-Time Optical Flow Computation Using Stereo. German Conference on Pattern Recognition 2016.
295 OASM-Net 6.89 % 19.42 % 8.98 % 100.00 % 0.73 s GPU @ 2.5 Ghz (Python)
A. Li and Z. Yuan: Occlusion Aware Stereo Matching via Cooperative Unsupervised Learning. Proceedings of the Asian Conference on Computer Vision, ACCV 2018.
296 ELAS_RVC code 7.38 % 21.15 % 9.67 % 100.00 % 0.19 s 4 cores @ >3.5 Ghz (C/C++)
A. Geiger, M. Roser and R. Urtasun: Efficient Large-Scale Stereo Matching. ACCV 2010.
297 ELAS code 7.86 % 19.04 % 9.72 % 92.35 % 0.3 s 1 core @ 2.5 Ghz (C/C++)
A. Geiger, M. Roser and R. Urtasun: Efficient Large-Scale Stereo Matching. ACCV 2010.
298 REAF code 8.43 % 18.51 % 10.11 % 100.00 % 1.1 s 1 core @ 2.5 Ghz (C/C++)
C. Cigla: Recursive Edge-Aware Filters for Stereo Matching. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2015.
299 self-raft3d
This method uses optical flow information.
8.15 % 20.86 % 10.27 % 100.00 % 0.1 s 1 core @ 2.5 Ghz (C/C++)
300 iGF
This method makes use of multiple (>2) views.
8.64 % 21.85 % 10.84 % 100.00 % 220 s 1 core @ 3.0 Ghz (C/C++)
R. Hamzah, H. Ibrahim and A. Hassan: Stereo matching algorithm based on per pixel difference adjustment, iterative guided filter and graph segmentation. Journal of Visual Communication and Image Representation 2016.
301 OCV-SGBM code 8.92 % 20.59 % 10.86 % 90.41 % 1.1 s 1 core @ 2.5 Ghz (C/C++)
H. Hirschmueller: Stereo processing by semiglobal matching and mutual information. PAMI 2008.
302 TW-SMNet 11.92 % 12.16 % 11.96 % 100.00 % 0.7 s GPU @ 2.5 Ghz (Python)
M. El-Khamy, H. Ren, X. Du and J. Lee: TW-SMNet: Deep Multitask Learning of Tele-Wide Stereo Matching. arXiv:1906.04463 2019.
303 SDM 9.41 % 24.75 % 11.96 % 62.56 % 1 min 1 core @ 2.5 Ghz (C/C++)
J. Kostkova: Stratified dense matching for stereopsis in complex scenes. BMVC 2003.
304 SGM&FlowFie+
This method uses optical flow information.
11.93 % 20.57 % 13.37 % 81.24 % 29 s 1 core @ 3.5 Ghz (C/C++)
R. Schuster, C. Bailer, O. Wasenmüller and D. Stricker: Combining Stereo Disparity and Optical Flow for Basic Scene Flow. Commercial Vehicle Technology Symposium (CVTS) 2018.
305 GCSF
This method uses optical flow information.
code 11.64 % 27.11 % 14.21 % 100.00 % 2.4 s 1 core @ 2.5 Ghz (C/C++)
J. Cech, J. Sanchez-Riera and R. Horaud: Scene Flow Estimation by growing Correspondence Seeds. CVPR 2011.
306 MT-TW-SMNet 15.47 % 16.25 % 15.60 % 100.00 % 0.4 s GPU @ 2.5 Ghz (Python)
M. El-Khamy, X. Du, H. Ren and J. Lee: Multi-Task Learning of Depth from Tele and Wide Stereo Image Pairs. Proceedings of the IEEE Conference on Image Processing 2019.
307 Mono-SF
This method uses optical flow information.
14.21 % 26.94 % 16.32 % 100.00 % 41 s 1 core @ 3.5 Ghz (Matlab + C/C++)
F. Brickwedde, S. Abraham and R. Mester: Mono-SF: Multi-View Geometry meets Single-View Depth for Monocular Scene Flow Estimation of Dynamic Traffic Scenes. Proc. of International Conference on Computer Vision (ICCV) 2019.
308 CostFilter code 17.53 % 22.88 % 18.42 % 100.00 % 4 min 1 core @ 2.5 Ghz (Matlab)
C. Rhemann, A. Hosni, M. Bleyer, C. Rother and M. Gelautz: Fast Cost-Volume Filtering for Visual Correspondence and Beyond. CVPR 2011.
309 MonoComb
This method uses optical flow information.
17.89 % 21.16 % 18.44 % 100.00 % 0.58 s RTX 2080 Ti
R. Schuster, C. Unger and D. Stricker: MonoComb: A Sparse-to-Dense Combination Approach for Monocular Scene Flow. ACM Computer Science in Cars Symposium (CSCS) 2020.
310 DWBSF
This method uses optical flow information.
19.61 % 22.69 % 20.12 % 100.00 % 7 min 4 cores @ 3.5 Ghz (C/C++)
C. Richardt, H. Kim, L. Valgaerts and C. Theobalt: Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. 3DV 2016.
311 RAFT-MSF
This method uses optical flow information.
18.10 % 36.82 % 21.21 % 100.00 % 0.18 s GPU @ 2.5 Ghz (Python)
312 monoResMatch code 22.10 % 19.81 % 21.72 % 100.00 % 0.16 s Titan X GPU
F. Tosi, F. Aleotti, M. Poggi and S. Mattoccia: Learning monocular depth estimation infusing traditional stereo knowledge. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
313 Self-Mono-SF-ft
This method uses optical flow information.
code 20.72 % 29.41 % 22.16 % 100.00 % 0.09 s NVIDIA GTX 1080 Ti
J. Hur and S. Roth: Self-Supervised Monocular Scene Flow Estimation. CVPR 2020.
314 Multi-Mono-SF-ft
This method uses optical flow information.
This method makes use of multiple (>2) views.
code 21.60 % 28.22 % 22.71 % 100.00 % 0.06 s NVIDIA GTX 1080 Ti
J. Hur and S. Roth: Self-Supervised Multi-Frame Monocular Scene Flow. CVPR 2021.
315 OCV-BM code 24.29 % 30.13 % 25.27 % 58.54 % 0.1 s 1 core @ 2.5 Ghz (C/C++)
G. Bradski: The OpenCV Library. Dr. Dobb's Journal of Software Tools 2000.
316 VSF
This method uses optical flow information.
code 27.31 % 21.72 % 26.38 % 100.00 % 125 min 1 core @ 2.5 Ghz (C/C++)
F. Huguet and F. Devernay: A Variational Method for Scene Flow Estimation from Stereo Sequences. ICCV 2007.
317 SED code 25.01 % 40.43 % 27.58 % 4.02 % 0.68 s 1 core @ 2.0 Ghz (C/C++)
D. Peña and A. Sutherland: Disparity Estimation by Simultaneous Edge Drawing. Computer Vision -- ACCV 2016 Workshops: ACCV 2016 International Workshops, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part II 2017.
318 Multi-Mono-SF
This method uses optical flow information.
This method makes use of multiple (>2) views.
code 27.48 % 47.30 % 30.78 % 100.00 % 0.06 s NVIDIA GTX 1080 Ti
J. Hur and S. Roth: Self-Supervised Multi-Frame Monocular Scene Flow. CVPR 2021.
319 mts1 code 28.03 % 46.55 % 31.11 % 2.52 % 0.18 s 4 cores @ 3.5 Ghz (C/C++)
R. Brandt, N. Strisciuglio, N. Petkov and M. Wilkinson: Efficient binocular stereo correspondence matching with 1-D Max-Trees. Pattern Recognition Letters 2020.
320 Self-Mono-SF
This method uses optical flow information.
code 31.22 % 48.04 % 34.02 % 100.00 % 0.09 s NVIDIA GTX 1080 Ti
J. Hur and S. Roth: Self-Supervised Monocular Scene Flow Estimation. CVPR 2020.
321 MST code 45.83 % 38.22 % 44.57 % 100.00 % 7 s 1 core @ 2.5 Ghz (Matlab + C/C++)
Q. Yang: A Non-Local Cost Aggregation Method for Stereo Matching. CVPR 2012.
322 Stereo-RSSF
This method uses optical flow information.
56.60 % 73.05 % 59.34 % 9.26 % 2.5 s 8 cores @ 2.5 Ghz (Matlab)
A. Erfan Salehi and R. Hoseuni: Stereo-RSSF: Stereo Robust Sparse Scene-Flow Estimation. 2023.
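For readers who want to reproduce error percentages such as those in the D1 columns above on the training set, the following is a minimal Python sketch, not part of the official devkit. It assumes the KITTI disparity PNG encoding (uint16, stored value = disparity × 256, value 0 = invalid) and the benchmark's outlier criterion (end-point error of at least 3 px and at least 5 % of the true disparity); all function names and file paths are illustrative placeholders.

import numpy as np
from PIL import Image

def load_disparity(path):
    # KITTI disparity PNGs are uint16 with disparity * 256; a value of 0 marks invalid pixels.
    raw = np.array(Image.open(path), dtype=np.float32)
    return raw / 256.0, raw > 0

def d1_outlier_rate(disp_est, disp_gt, valid_gt):
    # A pixel counts as an outlier if its disparity error is >= 3 px and >= 5 % of the ground truth.
    err = np.abs(disp_est - disp_gt)
    outlier = (err >= 3.0) & (err >= 0.05 * disp_gt)
    return 100.0 * outlier[valid_gt].mean()

# Illustrative usage on a single training image (paths are placeholders):
disp_gt, valid = load_disparity("training/disp_occ_0/000000_10.png")
disp_est, _ = load_disparity("results/disp_0/000000_10.png")
print("D1-all: %.2f %%" % d1_outlier_rate(disp_est, disp_gt, valid))

Note that this computes a per-image rate, whereas the official evaluation averages over all ground truth pixels of all test images and evaluates background, foreground and all regions separately.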




Related Datasets

  • HCI/Bosch Robust Vision Challenge: Optical flow and stereo vision challenge on high resolution imagery recorded at a high frame rate under diverse weather conditions (e.g., sunny, cloudy, rainy). The Robert Bosch AG provides a prize for the best performing method.
  • Image Sequence Analysis Test Site (EISATS): Synthetic image sequences with ground truth information provided by UoA and Daimler AG. Some of the images come with 3D range sensor information.
  • Middlebury Stereo Evaluation: The classic stereo evaluation benchmark, featuring four test images in version 2 of the benchmark, with very accurate ground truth from a structured light system. 38 image pairs are provided in total.
  • Daimler Stereo Dataset: Stereo bad weather highway scenes with partial ground truth for freespace
  • Make3D Range Image Data: Images with small-resolution ground truth used to learn and evaluate depth from single monocular images.
  • Lubor Ladicky's Stereo Dataset: Stereo Images with manually labeled ground truth based on polygonal areas.
  • Middlebury Optical Flow Evaluation: The classic optical flow evaluation benchmark, featuring eight test images, with very accurate ground truth from a shape from UV light pattern system. 24 image pairs are provided in total.

Citation

If you use this dataset in your research, please cite:
@ARTICLE{Menze2018JPRS,
  author = {Moritz Menze and Christian Heipke and Andreas Geiger},
  title = {Object Scene Flow},
  journal = {ISPRS Journal of Photogrammetry and Remote Sensing (JPRS)},
  year = {2018}
}
@INPROCEEDINGS{Menze2015ISA,
  author = {Moritz Menze and Christian Heipke and Andreas Geiger},
  title = {Joint 3D Estimation of Vehicles and Scene Flow},
  booktitle = {ISPRS Workshop on Image Sequence Analysis (ISA)},
  year = {2015}
}


