<h1 id="bev">BEV</h1>
<h1 id="papers">Papers</h1>
<p><strong>Vision-Centric BEV Perception: A Survey</strong></p>
<ul>
<li>arxiv: <a href="https://arxiv.org/abs/2208.02797">https://arxiv.org/abs/2208.02797</a></li>
<li>github: <a href="https://github.com/4DVLab/Vision-Centric-BEV-Perception">https://github.com/4DVLab/Vision-Centric-BEV-Perception</a></li>
</ul>
<h1 id="multi-camera-3d-object-detection">Multi-Camera 3D Object Detection</h1>
<p><strong>Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D</strong></p>
<ul>
<li>intro: ECCV 2020</li>
<li>intro: NVIDIA, Vector Institute, University of Toronto</li>
<li>project page: <a href="https://nv-tlabs.github.io/lift-splat-shoot/">https://nv-tlabs.github.io/lift-splat-shoot/</a></li>
<li>arxiv: <a href="https://arxiv.org/abs/2008.05711">https://arxiv.org/abs/2008.05711</a></li>
<li>github: <a href="https://github.com/nv-tlabs/lift-splat-shoot">https://github.com/nv-tlabs/lift-splat-shoot</a></li>
</ul>
<p><strong>BEVDet: High-Performance Multi-Camera 3D Object Detection in Bird-Eye-View</strong></p>
<ul>
<li>intro: PhiGent Robotics</li>
<li>arxiv: <a href="https://arxiv.org/abs/2112.11790">https://arxiv.org/abs/2112.11790</a></li>
</ul>
<p><strong>BEVDet4D: Exploit Temporal Cues in Multi-camera 3D Object Detection</strong></p>
<ul>
<li>intro: PhiGent Robotics</li>
<li>arxiv: <a href="https://arxiv.org/abs/2203.17054">https://arxiv.org/abs/2203.17054</a></li>
</ul>
<p><strong>BEVerse: Unified Perception and Prediction in Birds-Eye-View for Vision-Centric Autonomous Driving</strong></p>
<ul>
<li>intro: Tsinghua University & PhiGent Robotics</li>
<li>arxiv: <a href="https://arxiv.org/abs/2205.09743">https://arxiv.org/abs/2205.09743</a></li>
<li>github: <a href="https://github.com/zhangyp15/BEVerse">https://github.com/zhangyp15/BEVerse</a></li>
</ul>
<p><strong>BEVFormer: Learning Bird’s-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers</strong></p>
<ul>
<li>intro: Nanjing University & Shanghai AI Laboratory & The University of Hong Kong</li>
<li>arxiv: <a href="https://arxiv.org/abs/2203.17270">https://arxiv.org/abs/2203.17270</a></li>
<li>github: <a href="https://github.com/zhiqi-li/BEVFormer">https://github.com/zhiqi-li/BEVFormer</a></li>
</ul>
<p><strong>HFT: Lifting Perspective Representations via Hybrid Feature Transformation</strong></p>
<ul>
<li>intro: Institute of Automation, Chinese Academy of Sciences & PhiGent Robotics</li>
<li>arxiv: <a href="https://arxiv.org/abs/2204.05068">https://arxiv.org/abs/2204.05068</a></li>
<li>github: <a href="https://github.com/JiayuZou2020/HFT">https://github.com/JiayuZou2020/HFT</a></li>
</ul>
<p><strong>M^2BEV: Multi-Camera Joint 3D Detection and Segmentation with Unified Birds-Eye View Representation</strong></p>
<ul>
<li>project page: <a href="https://xieenze.github.io/projects/m2bev/">https://xieenze.github.io/projects/m2bev/</a></li>
<li>arxiv: <a href="https://arxiv.org/abs/2204.05088">https://arxiv.org/abs/2204.05088</a></li>
</ul>
<p><strong>BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation</strong></p>
<ul>
<li>project page: <a href="https://bevfusion.mit.edu/">https://bevfusion.mit.edu/</a></li>
<li>arxiv: <a href="https://arxiv.org/abs/2205.13542">https://arxiv.org/abs/2205.13542</a></li>
<li>github: <a href="https://github.com/mit-han-lab/bevfusion">https://github.com/mit-han-lab/bevfusion</a></li>
</ul>
<p><strong>BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework</strong></p>
<ul>
<li>intro: Peking University & Alibaba Group</li>
<li>arxiv: <a href="https://arxiv.org/abs/2205.13790">https://arxiv.org/abs/2205.13790</a></li>
<li>github: <a href="https://github.com/ADLab-AutoDrive/BEVFusion">https://github.com/ADLab-AutoDrive/BEVFusion</a></li>
</ul>
<p><strong>A Simple Baseline for BEV Perception Without LiDAR</strong></p>
<ul>
<li>intro: Carnegie Mellon University & Toyota Research Institute</li>
<li>project page: <a href="http://www.cs.cmu.edu/~aharley/bev/">http://www.cs.cmu.edu/~aharley/bev/</a></li>
<li>arxiv: <a href="https://arxiv.org/abs/2206.07959">https://arxiv.org/abs/2206.07959</a></li>
</ul>
<p><strong>BEVDepth: Acquisition of Reliable Depth for Multi-view 3D Object Detection</strong></p>
<ul>
<li>intro: Megvii Inc. (Face++) & Huazhong University of Science and Technology & Xi’an Jiaotong University</li>
<li>arxiv: <a href="https://arxiv.org/abs/2206.10092">https://arxiv.org/abs/2206.10092</a></li>
</ul>
<p><strong>PolarFormer: Multi-camera 3D Object Detection with Polar Transformers</strong></p>
<ul>
<li>intro: Fudan University & CASIA & Alibaba DAMO Academy & University of Surrey</li>
<li>arxiv: <a href="https://arxiv.org/abs/2206.15398">https://arxiv.org/abs/2206.15398</a></li>
<li>github: <a href="https://github.com/fudan-zvg/PolarFormer">https://github.com/fudan-zvg/PolarFormer</a></li>
</ul>
<p><strong>ORA3D: Overlap Region Aware Multi-view 3D Object Detection</strong></p>
<ul>
<li>intro: Korea University & KAIST & Hyundai Motor Company R&D Division</li>
<li>arxiv: <a href="https://arxiv.org/abs/2207.00865">https://arxiv.org/abs/2207.00865</a></li>
</ul>
<p><strong>MSMDFusion: Fusing LiDAR and Camera at Multiple Scales with Multi-Depth Seeds for 3D Object Detection</strong></p>
<ul>
<li>intro: Fudan University & Meituan</li>
<li>arxiv: <a href="https://arxiv.org/abs/2209.03102">https://arxiv.org/abs/2209.03102</a></li>
</ul>
<h1 id="hd-map-construction">HD Map Construction</h1>
<p><strong>HDMapNet: An Online HD Map Construction and Evaluation Framework</strong></p>
<ul>
<li>intro: ICRA 2022</li>
<li>intro: Tsinghua University & MIT & Li Auto</li>
<li>project page: <a href="https://tsinghua-mars-lab.github.io/HDMapNet/">https://tsinghua-mars-lab.github.io/HDMapNet/</a></li>
<li>arxiv: <a href="https://arxiv.org/abs/2107.06307">https://arxiv.org/abs/2107.06307</a></li>
<li>github: <a href="https://github.com/Tsinghua-MARS-Lab/HDMapNet">https://github.com/Tsinghua-MARS-Lab/HDMapNet</a></li>
</ul>
<p><strong>VectorMapNet: End-to-end Vectorized HD Map Learning</strong></p>
<ul>
<li>intro: Tsinghua University & MIT & Li Auto</li>
<li>arxiv: <a href="https://arxiv.org/abs/2206.08920">https://arxiv.org/abs/2206.08920</a></li>
</ul>
<p><strong>UniFormer: Unified Multi-view Fusion Transformer for Spatial-Temporal Representation in Bird’s-Eye-View</strong></p>
<ul>
<li>intro: Zhejiang University & DJI & Shanghai AI Lab</li>
<li>arxiv: <a href="https://arxiv.org/abs/2207.08536">https://arxiv.org/abs/2207.08536</a></li>
</ul>
<p><strong>MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction</strong></p>
<ul>
<li>intro: Huazhong University of Science & Technology & Horizon Robotics</li>
<li>arxiv: <a href="https://arxiv.org/abs/2208.14437">https://arxiv.org/abs/2208.14437</a></li>
<li>github: <a href="https://github.com/hustvl/MapTR">https://github.com/hustvl/MapTR</a></li>
</ul>
<h1 id="semantic-segmentation">Semantic Segmentation</h1>
<p><strong>LaRa: Latents and Rays for Multi-Camera Bird’s-Eye-View Semantic Segmentation</strong></p>
<ul>
<li>intro: Valeo.ai & Inria</li>
<li>arxiv: <a href="https://arxiv.org/abs/2206.13294">https://arxiv.org/abs/2206.13294</a></li>
</ul>
<p><strong>CoBEVT: Cooperative Bird’s Eye View Semantic Segmentation with Sparse Transformers</strong></p>
<ul>
<li>intro: University of California, Los Angeles & University of Texas at Austin & University of California</li>
<li>arxiv: <a href="https://arxiv.org/abs/2207.02202">https://arxiv.org/abs/2207.02202</a></li>
</ul>
<h1 id="3d">3D</h1>
<h1 id="papers">Papers</h1>
<p><strong>Expressive Body Capture: 3D Hands, Face, and Body from a Single Image</strong></p>
<ul>
<li>intro: CVPR 2019</li>
<li>arxiv: <a href="https://arxiv.org/abs/1904.05866">https://arxiv.org/abs/1904.05866</a></li>
<li>project page: <a href="https://smpl-x.is.tue.mpg.de/">https://smpl-x.is.tue.mpg.de/</a></li>
<li>github: <a href="https://github.com/vchoutas/smplify-x">https://github.com/vchoutas/smplify-x</a></li>
</ul>
<p><strong>Collaborative Regression of Expressive Bodies using Moderation</strong></p>
<ul>
<li>intro: PIXIE</li>
<li>project page: <a href="https://pixie.is.tue.mpg.de/">https://pixie.is.tue.mpg.de/</a></li>
<li>arxiv: <a href="https://arxiv.org/abs/2105.05301">https://arxiv.org/abs/2105.05301</a></li>
<li>github: <a href="https://github.com/YadiraF/PIXIE">https://github.com/YadiraF/PIXIE</a></li>
</ul>
<p><strong>Hand Image Understanding via Deep Multi-Task Learning</strong></p>
<ul>
<li>intro: ICCV 2021</li>
<li>arxiv: <a href="https://arxiv.org/abs/2107.11646">https://arxiv.org/abs/2107.11646</a></li>
</ul>
<p><strong>VoxelTrack: Multi-Person 3D Human Pose Estimation and Tracking in the Wild</strong></p>
<p><a href="https://arxiv.org/abs/2108.02452">https://arxiv.org/abs/2108.02452</a></p>
<p><strong>EventHPE: Event-based 3D Human Pose and Shape Estimation</strong></p>
<ul>
<li>intro: ICCV 2021</li>
<li>intro: University of Alberta & Shandong University & Celepixel Technology & University of Guelph & Nanyang Technological University</li>
<li>arxiv: <a href="https://arxiv.org/abs/2108.06819">https://arxiv.org/abs/2108.06819</a></li>
</ul>
<h1 id="monocular-3d-object-detection">Monocular 3D Object Detection</h1>
<p><strong>Monocular 3D Object Detection and Box Fitting Trained End-to-End Using Intersection-over-Union Loss</strong></p>
<ul>
<li>keywords: SS3D</li>
<li>arxiv: <a href="https://arxiv.org/abs/1906.08070">https://arxiv.org/abs/1906.08070</a></li>
<li>video: <a href="https://www.youtube.com/playlist?list=PL4jJwJr7UjMb4bzLwUGHdVmhfNS2Ads_x">https://www.youtube.com/playlist?list=PL4jJwJr7UjMb4bzLwUGHdVmhfNS2Ads_x</a></li>
</ul>
<p><strong>M3D-RPN: Monocular 3D Region Proposal Network for Object Detection</strong></p>
<ul>
<li>intro: ICCV 2019 oral</li>
<li>project page: <a href="http://cvlab.cse.msu.edu/project-m3d-rpn.html">http://cvlab.cse.msu.edu/project-m3d-rpn.html</a></li>
<li>arxiv: <a href="https://arxiv.org/abs/1907.06038">https://arxiv.org/abs/1907.06038</a></li>
<li>github: <a href="https://github.com/garrickbrazil/M3D-RPN">https://github.com/garrickbrazil/M3D-RPN</a></li>
</ul>
<p><strong>Learning Depth-Guided Convolutions for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: CVPR 2020</li>
<li>arxiv: <a href="https://arxiv.org/abs/1912.04799">https://arxiv.org/abs/1912.04799</a></li>
<li>github: <a href="https://github.com/dingmyu/D4LCN">https://github.com/dingmyu/D4LCN</a></li>
</ul>
<p><strong>RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving</strong></p>
<ul>
<li>intro: ECCV 2020</li>
<li>arxiv: <a href="https://arxiv.org/abs/2001.03343">https://arxiv.org/abs/2001.03343</a></li>
</ul>
<p><strong>SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation</strong></p>
<ul>
<li>intro: CVPR 2020</li>
<li>intro: ZongMu Tech & TU/e</li>
<li>arxiv: <a href="https://arxiv.org/abs/2002.10111">https://arxiv.org/abs/2002.10111</a></li>
<li>github(official): <a href="https://github.com/lzccccc/SMOKE">https://github.com/lzccccc/SMOKE</a></li>
</ul>
<p><strong>Center3D: Center-based Monocular 3D Object Detection with Joint Depth Understanding</strong></p>
<ul>
<li>keywords: one-stage anchor-free</li>
<li>arxiv: <a href="https://arxiv.org/abs/2005.13423">https://arxiv.org/abs/2005.13423</a></li>
</ul>
<p><strong>Monocular Differentiable Rendering for Self-Supervised 3D Object Detection</strong></p>
<ul>
<li>intro: ECCV 2020</li>
<li>intro: Preferred Networks, Inc & Toyota Research Institute</li>
<li>arxiv: <a href="https://arxiv.org/abs/2009.14524">https://arxiv.org/abs/2009.14524</a></li>
</ul>
<p><strong>M3DSSD: Monocular 3D Single Stage Object Detector</strong></p>
<ul>
<li>intro: CVPR 2021</li>
<li>intro: Zhejiang University & Mohamed bin Zayed University of Artificial Intelligence & Inception Institute of Artificial Intelligence</li>
<li>arxiv: <a href="https://arxiv.org/abs/2103.13164">https://arxiv.org/abs/2103.13164</a></li>
</ul>
<p><strong>Delving into Localization Errors for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: CVPR 2021</li>
<li>arxiv: <a href="https://arxiv.org/abs/2103.16237">https://arxiv.org/abs/2103.16237</a></li>
<li>github: <a href="https://github.com/xinzhuma/monodle">https://github.com/xinzhuma/monodle</a></li>
</ul>
<p><strong>Depth-conditioned Dynamic Message Propagation for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: CVPR 2021</li>
<li>arxiv: <a href="https://arxiv.org/abs/2103.16470">https://arxiv.org/abs/2103.16470</a></li>
<li>github: <a href="https://github.com/fudan-zvg/DDMP">https://github.com/fudan-zvg/DDMP</a></li>
</ul>
<p><strong>GrooMeD-NMS: Grouped Mathematically Differentiable NMS for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: CVPR 2021</li>
<li>arxiv: <a href="https://arxiv.org/abs/2103.17202">https://arxiv.org/abs/2103.17202</a></li>
</ul>
<p><strong>Objects are Different: Flexible Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: CVPR 2021</li>
<li>arxiv: <a href="https://arxiv.org/abs/2104.02323">https://arxiv.org/abs/2104.02323</a></li>
<li>github: <a href="https://github.com/zhangyp15/MonoFlex">https://github.com/zhangyp15/MonoFlex</a></li>
</ul>
<p><strong>Geometry-based Distance Decomposition for Monocular 3D Object Detection</strong></p>
<p><a href="https://arxiv.org/abs/2104.03775">https://arxiv.org/abs/2104.03775</a></p>
<p><strong>Geometry-aware data augmentation for monocular 3D object detection</strong></p>
<p><a href="https://arxiv.org/abs/2104.05858">https://arxiv.org/abs/2104.05858</a></p>
<p><strong>OCM3D: Object-Centric Monocular 3D Object Detection</strong></p>
<p><a href="https://arxiv.org/abs/2104.06041">https://arxiv.org/abs/2104.06041</a></p>
<p><strong>Exploring 2D Data Augmentation for 3D Monocular Object Detection</strong></p>
<p><a href="https://arxiv.org/abs/2104.10786">https://arxiv.org/abs/2104.10786</a></p>
<p><strong>Progressive Coordinate Transforms for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: Fudan University & Amazon Inc.</li>
<li>arxiv: <a href="https://arxiv.org/abs/2108.05793">https://arxiv.org/abs/2108.05793</a></li>
<li>github: <a href="https://github.com/amazon-research/progressive-coordinate-transforms">https://github.com/amazon-research/progressive-coordinate-transforms</a></li>
</ul>
<p><strong>AutoShape: Real-Time Shape-Aware Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: ICCV 2021</li>
<li>intro: Baidu Research</li>
<li>arxiv: <a href="https://arxiv.org/abs/2108.11127">https://arxiv.org/abs/2108.11127</a></li>
<li>github: <a href="https://github.com/zongdai/AutoShape">https://github.com/zongdai/AutoShape</a></li>
</ul>
<p><strong>Categorical Depth Distribution Network for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: CVPR 2021 oral</li>
<li>intro: University of Toronto Robotics Institute</li>
<li>project page: <a href="https://trailab.github.io/CaDDN/">https://trailab.github.io/CaDDN/</a></li>
<li>arxiv: <a href="https://arxiv.org/abs/2103.01100">https://arxiv.org/abs/2103.01100</a></li>
<li>github: <a href="https://github.com/TRAILab/CaDDN">https://github.com/TRAILab/CaDDN</a></li>
</ul>
<p><strong>The Devil is in the Task: Exploiting Reciprocal Appearance-Localization Features for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: ICCV 2021</li>
<li>arxiv: <a href="https://arxiv.org/abs/2112.14023">https://arxiv.org/abs/2112.14023</a></li>
</ul>
<p><strong>SGM3D: Stereo Guided Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: Fudan University & Baidu Inc.</li>
<li>arxiv: <a href="https://arxiv.org/abs/2112.01914">https://arxiv.org/abs/2112.01914</a></li>
<li>github: <a href="https://github.com/zhouzheyuan/sgm3d">https://github.com/zhouzheyuan/sgm3d</a></li>
</ul>
<p><strong>MonoDistill: Learning Spatial Features for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: ICLR 2022</li>
<li>intro: Dalian University of Technology & The University of Sydney</li>
<li>arxiv: <a href="https://arxiv.org/abs/2201.10830">https://arxiv.org/abs/2201.10830</a></li>
<li>github: <a href="https://github.com/monster-ghost/MonoDistill">https://github.com/monster-ghost/MonoDistill</a></li>
</ul>
<p><strong>Pseudo-Stereo for Monocular 3D Object Detection in Autonomous Driving</strong></p>
<ul>
<li>intro: CVPR 2022</li>
<li>arxiv: <a href="https://arxiv.org/abs/2203.02112">https://arxiv.org/abs/2203.02112</a></li>
<li>github: <a href="https://github.com/revisitq/Pseudo-Stereo-3D">https://github.com/revisitq/Pseudo-Stereo-3D</a></li>
</ul>
<p><strong>MonoJSG: Joint Semantic and Geometric Cost Volume for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: CVPR 2022</li>
<li>intro: The Hong Kong University of Science and Technology & DJI</li>
<li>arxiv: <a href="https://arxiv.org/abs/2203.08563">https://arxiv.org/abs/2203.08563</a></li>
</ul>
<p><strong>MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer</strong></p>
<ul>
<li>intro: CVPR 2022</li>
<li>intro: National Taiwan University & Mobile Drive Technology</li>
<li>arxiv: <a href="https://arxiv.org/abs/2203.10981">https://arxiv.org/abs/2203.10981</a></li>
<li>github: <a href="https://github.com/kuanchihhuang/MonoDTR">https://github.com/kuanchihhuang/MonoDTR</a></li>
</ul>
<p><strong>MonoDETR: Depth-aware Transformer for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: Shanghai AI Laboratory & Peking University & The Chinese University of Hong Kong</li>
<li>arxiv: <a href="https://arxiv.org/abs/2203.13310">https://arxiv.org/abs/2203.13310</a></li>
</ul>
<p><strong>Homography Loss for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: CVPR 2022</li>
<li>arxiv: <a href="https://arxiv.org/abs/2204.00754">https://arxiv.org/abs/2204.00754</a></li>
</ul>
<p><strong>Towards Model Generalization for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: Harbin Institute of Technology & University of Science and Technology of China & SenseTime Research</li>
<li>arxiv: <a href="https://arxiv.org/abs/2205.11664">https://arxiv.org/abs/2205.11664</a></li>
</ul>
<p><strong>Delving into the Pre-training Paradigm of Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: Tsinghua University & Huazhong University of Science and Technology</li>
<li>arxiv: <a href="https://arxiv.org/abs/2206.03657">https://arxiv.org/abs/2206.03657</a></li>
</ul>
<p><strong>MonoGround: Detecting Monocular 3D Objects from the Ground</strong></p>
<ul>
<li>intro: CVPR 2022</li>
<li>arxiv: <a href="https://arxiv.org/abs/2206.07372">https://arxiv.org/abs/2206.07372</a></li>
<li>github: <a href="https://github.com/cfzd/MonoGround">https://github.com/cfzd/MonoGround</a></li>
</ul>
<p><strong>Densely Constrained Depth Estimator for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: ECCV 2022</li>
<li>intro: CASIA & UCAS & HKISI CAS</li>
<li>arxiv: <a href="https://arxiv.org/abs/2207.10047">https://arxiv.org/abs/2207.10047</a></li>
<li>github: <a href="https://github.com/BraveGroup/DCD">https://github.com/BraveGroup/DCD</a></li>
</ul>
<p><strong>Consistency of Implicit and Explicit Features Matters for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: DiDi</li>
<li>arxiv: <a href="https://arxiv.org/abs/2207.07933">https://arxiv.org/abs/2207.07933</a></li>
</ul>
<p><strong>DID-M3D: Decoupling Instance Depth for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: ECCV 2022</li>
<li>intro: Zhejiang University & Fabu Inc.</li>
<li>arxiv: <a href="https://arxiv.org/abs/2207.08531">https://arxiv.org/abs/2207.08531</a></li>
<li>github: <a href="https://github.com/SPengLiang/DID-M3D">https://github.com/SPengLiang/DID-M3D</a></li>
</ul>
<p><strong>DEVIANT: Depth EquiVarIAnt NeTwork for Monocular 3D Object Detection</strong></p>
<ul>
<li>intro: ECCV 2022</li>
<li>intro: Michigan State University & Meta AI & Ford Motor Company</li>
<li>arxiv: <a href="https://arxiv.org/abs/2207.10758">https://arxiv.org/abs/2207.10758</a></li>
<li>github: <a href="https://github.com/abhi1kumar/DEVIANT">https://github.com/abhi1kumar/DEVIANT</a></li>
</ul>
<p><strong>Monocular 3D Object Detection with Depth from Motion</strong></p>
<ul>
<li>intro: ECCV 2022 Oral</li>
<li>intro: The Chinese University of Hong Kong & Shanghai AI Laboratory</li>
<li>arxiv: <a href="https://arxiv.org/abs/2207.12988">https://arxiv.org/abs/2207.12988</a></li>
<li>github: <a href="https://github.com/Tai-Wang/Depth-from-Motion">https://github.com/Tai-Wang/Depth-from-Motion</a></li>
</ul>
<p><strong>MV-FCOS3D++: Multi-View Camera-Only 4D Object Detection with Pretrained Monocular Backbones</strong></p>
<ul>
<li>intro: The Chinese University of Hong Kong & Hong Kong University of Science and Technology & Nanyang Technological University</li>
<li>arxiv: <a href="https://arxiv.org/abs/2207.12716">https://arxiv.org/abs/2207.12716</a></li>
<li>github: <a href="https://github.com/Tai-Wang/Depth-from-Motion">https://github.com/Tai-Wang/Depth-from-Motion</a></li>
</ul>
<p><strong>SEFormer: Structure Embedding Transformer for 3D Object Detection</strong></p>
<ul>
<li>intro: Tsinghua University & Australian National University & National University of Singapore</li>
<li>arxiv: <a href="https://arxiv.org/abs/2209.01745">https://arxiv.org/abs/2209.01745</a></li>
</ul>
<h1 id="multi-modal-3d-object-detection">Multi-Modal 3D Object Detection</h1>
<p><strong>AutoAlign: Pixel-Instance Feature Aggregation for Multi-Modal 3D Object Detection</strong></p>
<ul>
<li>intro: IJCAI 2022</li>
<li>intro: University of Science and Technology of China & Harbin Institute of Technology & SenseTime Research & The Chinese University of Hong Kong & Tsinghua University</li>
<li>arxiv: <a href="https://arxiv.org/abs/2201.06493">https://arxiv.org/abs/2201.06493</a></li>
</ul>
<p><strong>AutoAlignV2: Deformable Feature Aggregation for Dynamic Multi-Modal 3D Object Detection</strong></p>
<ul>
<li>intro: ECCV 2022</li>
<li>intro: University of Science and Technology of China & Harbin Institute of Technology & SenseTime Research</li>
<li>arxiv: <a href="https://arxiv.org/abs/2207.10316">https://arxiv.org/abs/2207.10316</a></li>
<li>github: <a href="https://github.com/zehuichen123/AutoAlignV2">https://github.com/zehuichen123/AutoAlignV2</a></li>
</ul>
<h1 id="monocular-3d-detection-and-tracking">Monocular 3D Detection and Tracking</h1>
<p><strong>Time3D: End-to-End Joint Monocular 3D Object Detection and Tracking for Autonomous Driving</strong></p>
<ul>
<li>intro: CVPR 2022</li>
<li>intro: PP-CEM & Rising Auto</li>
<li>arxiv: <a href="https://arxiv.org/abs/2205.14882">https://arxiv.org/abs/2205.14882</a></li>
</ul>
<p><strong>Depth Estimation Matters Most: Improving Per-Object Depth Estimation for Monocular 3D Detection and Tracking</strong></p>
<ul>
<li>intro: Waymo LLC & Johns Hopkins University & Cornell University</li>
<li>arxiv: <a href="https://arxiv.org/abs/2206.03666">https://arxiv.org/abs/2206.03666</a></li>
</ul>
<h1 id="multi-camera-3d-object-detection">Multi-Camera 3D Object Detection</h1>
<p><strong>PETR: Position Embedding Transformation for Multi-View 3D Object Detection</strong></p>
<ul>
<li>intro: MEGVII Technology</li>
<li>arxiv: <a href="https://arxiv.org/abs/2203.05625">https://arxiv.org/abs/2203.05625</a></li>
</ul>
<p><strong>PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images</strong></p>
<ul>
<li>intro: MEGVII Technology</li>
<li>arxiv: <a href="https://arxiv.org/abs/2206.01256">https://arxiv.org/abs/2206.01256</a></li>
</ul>
<h2 id="sparse4d">Sparse4D</h2>
<p><strong>Sparse4D: Multi-view 3D Object Detection with Sparse Spatial-Temporal Fusion</strong></p>
<ul>
<li>intro: Horizon Robotics</li>
<li>arxiv: <a href="https://arxiv.org/abs/2211.10581">https://arxiv.org/abs/2211.10581</a></li>
<li>github: <a href="https://github.com/linxuewu/Sparse4D">https://github.com/linxuewu/Sparse4D</a></li>
</ul>
<p><strong>Sparse4D v2: Recurrent Temporal Fusion with Sparse Model</strong></p>
<ul>
<li>intro: Horizon Robotics</li>
<li>arxiv: <a href="https://arxiv.org/abs/2305.14018">https://arxiv.org/abs/2305.14018</a></li>
<li>github: <a href="https://github.com/linxuewu/Sparse4D">https://github.com/linxuewu/Sparse4D</a></li>
</ul>
<p><strong>Sparse4D v3: Advancing End-to-End 3D Detection and Tracking</strong></p>
<ul>
<li>intro: Horizon Robotics</li>
<li>arxiv: <a href="https://arxiv.org/abs/2311.11722">https://arxiv.org/abs/2311.11722</a></li>
<li>github: <a href="https://github.com/linxuewu/Sparse4D">https://github.com/linxuewu/Sparse4D</a></li>
</ul>
<h1 id="multi-camera-multiple-3d-object-tracking">Multi-Camera Multiple 3D Object Tracking</h1>
<p><strong>Multi-Camera Multiple 3D Object Tracking on the Move for Autonomous Vehicles</strong></p>
<ul>
<li>intro: CVPR Workshop 2022</li>
<li>arxiv: <a href="https://arxiv.org/abs/2204.09151">https://arxiv.org/abs/2204.09151</a></li>
</ul>
<p><strong>SRCN3D: Sparse R-CNN 3D Surround-View Camera Object Detection and Tracking for Autonomous Driving</strong></p>
<ul>
<li>intro: Tsinghua University</li>
<li>arxiv: <a href="https://arxiv.org/abs/2206.14451">https://arxiv.org/abs/2206.14451</a></li>
<li>github: <a href="https://github.com/synsin0/SRCN3D">https://github.com/synsin0/SRCN3D</a></li>
</ul>
<h1 id="study-resources">Study Resources</h1>
<p><strong>draw.io</strong></p>
<ul>
<li>intro: an app to create diagrams. You can use it online, download it or add it to Android and iOS for free</li>
<li>homepage: <a href="https://www.draw.io/">https://www.draw.io/</a></li>
</ul>
<h1 id="keep-up-with-new-trends">Keep Up With New Trends</h1>
<p><strong>ComputerVisionFoundation Videos</strong></p>
<p><a href="https://www.youtube.com/channel/UC0n76gicaarsN_Y9YShWwhw/playlists">https://www.youtube.com/channel/UC0n76gicaarsN_Y9YShWwhw/playlists</a></p>
<h1 id="eccv-2018">ECCV 2018</h1>
<p><strong>ECCV 2018 papers</strong></p>
<p><a href="http://openaccess.thecvf.com/ECCV2018.py">http://openaccess.thecvf.com/ECCV2018.py</a></p>
<h1 id="icml-2018">ICML 2018</h1>
<p><strong>DeepMind papers at ICML 2018</strong></p>
<p><strong>Facebook Research at ICML 2018</strong></p>
<p><a href="https://research.fb.com/facebook-research-at-icml-2018/">https://research.fb.com/facebook-research-at-icml-2018/</a></p>
<p><strong>ICML 2018 Notes</strong></p>
<ul>
<li>day1: <a href="https://gmarti.gitlab.io/ml/2018/07/10/icml18-tutorials.html">https://gmarti.gitlab.io/ml/2018/07/10/icml18-tutorials.html</a></li>
<li>day2: <a href="https://gmarti.gitlab.io/ml/2018/07/11/icml18-day-2.html">https://gmarti.gitlab.io/ml/2018/07/11/icml18-day-2.html</a></li>
<li>day3: <a href="https://gmarti.gitlab.io/ml/2018/07/12/icml18-day-3.html">https://gmarti.gitlab.io/ml/2018/07/12/icml18-day-3.html</a></li>
<li>day4: <a href="https://gmarti.gitlab.io/ml/2018/07/13/icml18-day-4.html">https://gmarti.gitlab.io/ml/2018/07/13/icml18-day-4.html</a></li>
</ul>
<p><strong>ICML 2018 Notes</strong></p>
<ul>
<li>notes: <a href="https://david-abel.github.io/blog/posts/misc/icml_2018.pdf">https://david-abel.github.io/blog/posts/misc/icml_2018.pdf</a></li>
<li>github: <a href="https://david-abel.github.io/">https://david-abel.github.io/</a></li>
</ul>
<h1 id="ijcai-2018">IJCAI 2018</h1>
<p><strong>Proceedings of IJCAI 2018</strong></p>
<p><a href="https://www.ijcai.org/proceedings/2018/">https://www.ijcai.org/proceedings/2018/</a></p>
<h1 id="cvpr-2018">CVPR 2018</h1>
<p><strong>CVPR 2018 open access</strong></p>
<p><a href="http://openaccess.thecvf.com/CVPR2018.py">http://openaccess.thecvf.com/CVPR2018.py</a></p>
<p><strong>CVPR18: Tutorials</strong></p>
<ul>
<li>youtube: <a href="https://www.youtube.com/playlist?list=PL_bDvITUYucD54Ym5XKGqTv9xNsrOX0aS">https://www.youtube.com/playlist?list=PL_bDvITUYucD54Ym5XKGqTv9xNsrOX0aS</a></li>
<li>bilibili: <a href="https://www.bilibili.com/video/av27038992/">https://www.bilibili.com/video/av27038992/</a></li>
</ul>
<h1 id="valse-2018">VALSE 2018</h1>
<p><a href="http://ice.dlut.edu.cn/valse2018/programs.html">http://ice.dlut.edu.cn/valse2018/programs.html</a></p>
<h1 id="nips-2017">NIPS 2017</h1>
<p><strong>NIPS 2017 Spotlights</strong></p>
<ul>
<li>youtube: <a href="https://www.youtube.com/playlist?list=PLbVjlVq6hjK89WtlGHdC_PNwcawrzht5S">https://www.youtube.com/playlist?list=PLbVjlVq6hjK89WtlGHdC_PNwcawrzht5S</a></li>
</ul>
<p><strong>NIPS 2017 — notes and thoughs</strong></p>
<p><a href="https://olgalitech.wordpress.com/2017/12/12/nips-2017-notes-and-thoughs/">https://olgalitech.wordpress.com/2017/12/12/nips-2017-notes-and-thoughs/</a></p>
<p><strong>NIPS 2017 Notes</strong></p>
<ul>
<li>notes: <a href="https://cs.brown.edu/~dabel/blog/posts/misc/nips_2017.pdf">https://cs.brown.edu/~dabel/blog/posts/misc/nips_2017.pdf</a></li>
<li>blog: <a href="https://cs.brown.edu/~dabel/blog.html">https://cs.brown.edu/~dabel/blog.html</a></li>
</ul>
<p><strong>NIPS 2017</strong></p>
<ul>
<li>intro: A list of resources for all invited talks, tutorials, workshops and presentations at NIPS 2017</li>
<li>github: <a href="https://github.com/hindupuravinash/nips2017">https://github.com/hindupuravinash/nips2017</a></li>
</ul>
<p><strong>Global NIPS 2017 Paper Implementation Challenge</strong></p>
<ul>
<li>intro: 8th December 2017 - 31st January 2018 (Application closed)</li>
<li>homepage: <a href="https://nurture.ai/nips-challenge">https://nurture.ai/nips-challenge</a></li>
</ul>
<h1 id="iccv-2017">ICCV 2017</h1>
<p><strong>ICCV 2017 open access</strong></p>
<p><a href="http://openaccess.thecvf.com/ICCV2017.py">http://openaccess.thecvf.com/ICCV2017.py</a></p>
<p><strong>ICCV 2017 Workshops, Venice Italy</strong></p>
<p><a href="http://openaccess.thecvf.com/ICCV2017_workshops/menu.py">http://openaccess.thecvf.com/ICCV2017_workshops/menu.py</a></p>
<p><strong>ICCV17 Tutorials</strong></p>
<p><a href="https://www.youtube.com/playlist?list=PL_bDvITUYucBGj2Hmv1e7CP9U82kHWVOT">https://www.youtube.com/playlist?list=PL_bDvITUYucBGj2Hmv1e7CP9U82kHWVOT</a></p>
<p><strong>Facebook at ICCV 2017</strong></p>
<p><a href="https://research.fb.com/facebook-at-iccv-2017/">https://research.fb.com/facebook-at-iccv-2017/</a></p>
<p><strong>ICCV 2017 Tutorial on GANs</strong></p>
<ul>
<li>homepage: <a href="https://sites.google.com/view/iccv-2017-gans/schedule">https://sites.google.com/view/iccv-2017-gans/schedule</a></li>
<li>youtube: <a href="https://www.youtube.com/playlist?list=PL_bDvITUYucDEzjMTgh1cgtTIODZe3prZ">https://www.youtube.com/playlist?list=PL_bDvITUYucDEzjMTgh1cgtTIODZe3prZ</a></li>
</ul>
<h1 id="ilsvrc-2017">ILSVRC 2017</h1>
<p><strong>Overview of ILSVRC 2017</strong></p>
<p><a href="http://image-net.org/challenges/talks_2017/ILSVRC2017_overview.pdf">http://image-net.org/challenges/talks_2017/ILSVRC2017_overview.pdf</a></p>
<p><strong>ImageNet: Where are we going? And where have we been?</strong></p>
<ul>
<li>intro: by Fei-Fei Li, Jia Deng</li>
<li>slides: <a href="http://image-net.org/challenges/talks_2017/imagenet_ilsvrc2017_v1.0.pdf">http://image-net.org/challenges/talks_2017/imagenet_ilsvrc2017_v1.0.pdf</a></li>
</ul>
<h1 id="deep-learning-and-reinforcement-learning-summer-school-2017">Deep Learning and Reinforcement Learning Summer School 2017</h1>
<ul>
<li>homepage: <a href="https://mila.umontreal.ca/en/cours/deep-learning-summer-school-2017/">https://mila.umontreal.ca/en/cours/deep-learning-summer-school-2017/</a></li>
<li>slides: <a href="https://mila.umontreal.ca/en/cours/deep-learning-summer-school-2017/slides/">https://mila.umontreal.ca/en/cours/deep-learning-summer-school-2017/slides/</a></li>
<li>mirror: <a href="https://pan.baidu.com/s/1eSvijvW#list/path=%2F">https://pan.baidu.com/s/1eSvijvW#list/path=%2F</a></li>
</ul>
<h1 id="iclr-2017">ICLR 2017</h1>
<p><strong>ICLR 2017 Videos</strong></p>
<p><a href="https://www.facebook.com/pg/iclr.cc/videos/">https://www.facebook.com/pg/iclr.cc/videos/</a></p>
<h1 id="cvpr-2017">CVPR 2017</h1>
<p><strong>CVPR 2017 open access</strong></p>
<p><a href="http://openaccess.thecvf.com/CVPR2017.py">http://openaccess.thecvf.com/CVPR2017.py</a></p>
<p><strong>CVPR 2017 Workshops, Honolulu Hawaii</strong></p>
<p><a href="http://openaccess.thecvf.com/CVPR2017_workshops/menu.py">http://openaccess.thecvf.com/CVPR2017_workshops/menu.py</a></p>
<h2 id="cvpr-2017-tutorial">CVPR 2017 Tutorial</h2>
<p><strong>CVPR’17 Tutorial: Deep Learning for Objects and Scenes</strong></p>
<p><a href="http://deeplearning.csail.mit.edu/">http://deeplearning.csail.mit.edu/</a></p>
<p><strong>Lecture 1: Learning Deep Representations for Visual Recognition</strong></p>
<ul>
<li>intro: by Kaiming He</li>
<li>slides: <a href="http://deeplearning.csail.mit.edu/cvpr2017_tutorial_kaiminghe.pdf">http://deeplearning.csail.mit.edu/cvpr2017_tutorial_kaiminghe.pdf</a></li>
<li>youtube: <a href="https://www.youtube.com/watch?v=jHv37mKAhV4">https://www.youtube.com/watch?v=jHv37mKAhV4</a></li>
</ul>
<p><strong>Lecture 2: Deep Learning for Instance-level Object Understanding</strong></p>
<ul>
<li>intro: by Ross Girshick</li>
<li>slides: <a href="http://deeplearning.csail.mit.edu/instance_ross.pdf">http://deeplearning.csail.mit.edu/instance_ross.pdf</a></li>
<li>youtube: <a href="https://www.youtube.com/watch?v=jHv37mKAhV4&feature=youtu.be&t=2349">https://www.youtube.com/watch?v=jHv37mKAhV4&feature=youtu.be&t=2349</a></li>
</ul>
<h1 id="nips-2016">NIPS 2016</h1>
<p><strong>NIPS 2016 Schedule</strong></p>
<p><a href="https://nips.cc/Conferences/2016/Schedule">https://nips.cc/Conferences/2016/Schedule</a></p>
<p><strong>DeepMind Papers @ NIPS (Part 1)</strong></p>
<p><a href="https://deepmind.com/blog/deepmind-papers-nips-part-1/">https://deepmind.com/blog/deepmind-papers-nips-part-1/</a></p>
<p><strong>DeepMind Papers @ NIPS (Part 2)</strong></p>
<p><a href="https://deepmind.com/blog/deepmind-papers-nips-part-2/">https://deepmind.com/blog/deepmind-papers-nips-part-2/</a></p>
<p><strong>DeepMind Papers @ NIPS (Part 3)</strong></p>
<p><a href="https://deepmind.com/blog/deepmind-papers-nips-part-3/">https://deepmind.com/blog/deepmind-papers-nips-part-3/</a></p>
<p><strong>NIPS 2016 Review, Days 0 & 1</strong></p>
<p><a href="https://gab41.lab41.org/nips-2016-review-day-1-6e504bcf1451#.ldaft47ea">https://gab41.lab41.org/nips-2016-review-day-1-6e504bcf1451#.ldaft47ea</a></p>
<p><strong>NIPS 2016 Review, Day 2</strong></p>
<p><a href="https://gab41.lab41.org/nips-2016-review-day-2-daff1088135e#.o9r8li43x">https://gab41.lab41.org/nips-2016-review-day-2-daff1088135e#.o9r8li43x</a></p>
<p><strong>NIPS 2016 — Day 1 Highlights</strong></p>
<p><a href="https://blog.insightdatascience.com/nips-2016-day-1-6ae1207cab82#.c248ycixg">https://blog.insightdatascience.com/nips-2016-day-1-6ae1207cab82#.c248ycixg</a></p>
<p><strong>NIPS 2016 — Day 2 Highlights: Platform wars, RL and RNNs</strong></p>
<p><a href="https://blog.insightdatascience.com/nips-2016-day-2-highlights-platform-wars-rl-and-rnns-9dca43bc1448#.zgtu1rtr0">https://blog.insightdatascience.com/nips-2016-day-2-highlights-platform-wars-rl-and-rnns-9dca43bc1448#.zgtu1rtr0</a></p>
<p><strong>50 things I learned at NIPS 2016</strong></p>
<p><a href="https://blog.ought.com/nips-2016-875bb8fadb8c#.f1a1161hq">https://blog.ought.com/nips-2016-875bb8fadb8c#.f1a1161hq</a></p>
<p><strong>NIPS 2016 Highlights</strong></p>
<ul>
<li>slides: <a href="http://www.slideshare.net/SebastianRuder/nips-2016-highlights-sebastian-ruder">http://www.slideshare.net/SebastianRuder/nips-2016-highlights-sebastian-ruder</a></li>
<li>mirror: <a href="https://pan.baidu.com/s/1kUKnCJ9">https://pan.baidu.com/s/1kUKnCJ9</a></li>
</ul>
<p><strong>Brad Neuberg’s NIPS 2016 Notes</strong></p>
<ul>
<li>blog: <a href="https://paper.dropbox.com/doc/Brad-Neubergs-NIPS-2016-Notes-XUFRdpNYyBhau0gWcybRo">https://paper.dropbox.com/doc/Brad-Neubergs-NIPS-2016-Notes-XUFRdpNYyBhau0gWcybRo</a></li>
</ul>
<p><strong>All Code Implementations for NIPS 2016 papers</strong></p>
<ul>
<li>reddit: <a href="https://www.reddit.com/r/MachineLearning/comments/5hwqeb/project_all_code_implementations_for_nips_2016/">https://www.reddit.com/r/MachineLearning/comments/5hwqeb/project_all_code_implementations_for_nips_2016/</a></li>
</ul>
<h1 id="heuritech-deep-learning-meetup">Heuritech Deep Learning Meetup</h1>
<p><strong>Heuritech Deep Learning Meetup #7: more than 100 attendees for convolutionnal neural networks</strong></p>
<ul>
<li>blog: <a href="https://blog.heuritech.com/2016/11/03/heuritech-deep-learning-meetup-7-more-than-100-attendees-for-convolutionnal-neural-networks/">https://blog.heuritech.com/2016/11/03/heuritech-deep-learning-meetup-7-more-than-100-attendees-for-convolutionnal-neural-networks/</a></li>
</ul>
<h1 id="eccv-2016">ECCV 2016</h1>
<p><strong>ECCV Brings Together the Brightest Minds in Computer Vision</strong></p>
<p><a href="https://research.facebook.com/blog/eccv-brings-together-the-brightest-minds-in-computer-vision/">https://research.facebook.com/blog/eccv-brings-together-the-brightest-minds-in-computer-vision/</a></p>
<p><strong>ECCV in a theatrical setting</strong></p>
<ul>
<li>blog: <a href="http://zoyathinks.blogspot.jp/2016/10/eccv-in-theatrical-setting.html">http://zoyathinks.blogspot.jp/2016/10/eccv-in-theatrical-setting.html</a></li>
</ul>
<h1 id="2nd-imagenet--coco-joint-workshop">2nd ImageNet + COCO Joint Workshop</h1>
<p><strong>2nd ImageNet and COCO Visual Recognition Challenges Joint Workshop</strong></p>
<p><a href="http://image-net.org/challenges/ilsvrc+coco2016">http://image-net.org/challenges/ilsvrc+coco2016</a></p>
<h1 id="dlss-2016">DLSS 2016</h1>
<p><strong>Montréal Deep Learning Summer School 2016</strong></p>
<ul>
<li>video lectures: <a href="http://videolectures.net/deeplearning2016_montreal/">http://videolectures.net/deeplearning2016_montreal/</a></li>
<li>material: <a href="https://github.com/mila-udem/summerschool2016">https://github.com/mila-udem/summerschool2016</a></li>
<li>slides: <a href="https://sites.google.com/site/deeplearningsummerschool2016/speakers">https://sites.google.com/site/deeplearningsummerschool2016/speakers</a></li>
<li>mirror: <a href="http://pan.baidu.com/s/1kUWrWI7">http://pan.baidu.com/s/1kUWrWI7</a></li>
</ul>
<p><strong>Highlights from the Deep Learning Summer School (Part 1)</strong></p>
<p><a href="https://vkrakovna.wordpress.com/2016/08/25/highlights-from-the-deep-learning-summer-school-part-1/">https://vkrakovna.wordpress.com/2016/08/25/highlights-from-the-deep-learning-summer-school-part-1/</a></p>
<p><strong>What I learned from Deep Learning Summer School 2016</strong></p>
<p><a href="https://www.linkedin.com/pulse/what-i-learned-from-deep-learning-summer-school-2016-hamid-palangi">https://www.linkedin.com/pulse/what-i-learned-from-deep-learning-summer-school-2016-hamid-palangi</a></p>
<h1 id="icml-2016">ICML 2016</h1>
<p><strong>10 Papers from ICML and CVPR</strong></p>
<p><a href="https://leotam.github.io/general/2016/07/12/ICMLcVPR.html">https://leotam.github.io/general/2016/07/12/ICMLcVPR.html</a></p>
<p><strong>ICML 2016 was awesome</strong></p>
<ul>
<li>blog: <a href="http://hunch.net/?p=4710099">http://hunch.net/?p=4710099</a></li>
</ul>
<p><strong>Highlights from ICML 2016</strong></p>
<p><a href="http://www.lunametrics.com/blog/2016/07/05/highlights-icml-2016/">http://www.lunametrics.com/blog/2016/07/05/highlights-icml-2016/</a></p>
<p><strong>ICML 2016 tutorials</strong></p>
<p><a href="http://icml.cc/2016/?page_id=97">http://icml.cc/2016/?page_id=97</a></p>
<p><strong>Deep Learning, Tools and Methods workshop</strong></p>
<ul>
<li>intro: 3-hour tutorials on Torch and TensorFlow, and talks by Yoshua Bengio, NVIDIA, AMD</li>
<li>homepage: <a href="https://portal.klewel.com/watch/webcast/deep-learning-tools-and-methods-workshop/">https://portal.klewel.com/watch/webcast/deep-learning-tools-and-methods-workshop/</a></li>
<li>slides: <a href="http://www.idiap.ch/workshop/dltm/">http://www.idiap.ch/workshop/dltm/</a></li>
<li>Torch tutorials: <a href="https://github.com/szagoruyko/idiap-tutorials">https://github.com/szagoruyko/idiap-tutorials</a></li>
</ul>
<p><strong>ICML 2016 Conference and Workshops</strong></p>
<ul>
<li>intro: talks, orals, tutorials</li>
<li>homepage: <a href="http://techtalks.tv/icml/2016/">http://techtalks.tv/icml/2016/</a></li>
</ul>
<h1 id="iclr-2016">ICLR 2016</h1>
<p><strong>Deep Learning Trends @ ICLR 2016</strong></p>
<p><a href="http://www.computervisionblog.com/2016/06/deep-learning-trends-iclr-2016.html">http://www.computervisionblog.com/2016/06/deep-learning-trends-iclr-2016.html</a></p>
<p><strong>WACV 2016: IEEE Winter Conference on Applications of Computer Vision</strong></p>
<ul>
<li>homepage: <a href="http://www.wacv16.org/">http://www.wacv16.org/</a></li>
<li>youtube: <a href="https://www.youtube.com/channel/UCdV5ooxkvhbpmv0_3MzIo9g/videos">https://www.youtube.com/channel/UCdV5ooxkvhbpmv0_3MzIo9g/videos</a></li>
</ul>
<p><strong>ICLR 2016 Takeaways: Adversarial Models & Optimization</strong></p>
<p><a href="https://indico.io/blog/iclr-2016-takeaways/">https://indico.io/blog/iclr-2016-takeaways/</a></p>
<p><strong>tensor talk - Latest AI Code: conference-iclr-2016</strong></p>
<p><a href="https://tensortalk.com/?cat=conference-iclr-2016">https://tensortalk.com/?cat=conference-iclr-2016</a></p>
<h1 id="cvpr-2016">CVPR 2016</h1>
<p><strong>CVPR 2016</strong></p>
<ul>
<li>homepage: <a href="http://cvpr2016.thecvf.com/program/main_conference">http://cvpr2016.thecvf.com/program/main_conference</a></li>
<li>Object Recognition and Detection: <a href="http://cvpr2016.thecvf.com/program/main_conference#O1-2A">http://cvpr2016.thecvf.com/program/main_conference#O1-2A</a></li>
<li>Object Detection 1: <a href="http://cvpr2016.thecvf.com/program/main_conference#S1-2A">http://cvpr2016.thecvf.com/program/main_conference#S1-2A</a></li>
<li>Object Detection 2: <a href="http://cvpr2016.thecvf.com/program/main_conference#S2-2A">http://cvpr2016.thecvf.com/program/main_conference#S2-2A</a></li>
</ul>
<p><strong>Workshop @ CVPR16: Deep Vision Workshop</strong></p>
<ul>
<li>youtube: <a href="https://www.youtube.com/playlist?list=PL_bDvITUYucC8uLRtWw8fdvVr3DdwzAeH">https://www.youtube.com/playlist?list=PL_bDvITUYucC8uLRtWw8fdvVr3DdwzAeH</a></li>
</ul>
<p><strong>Five Things I Learned at CVPR 2016</strong></p>
<ul>
<li>day 1: <a href="https://gab41.lab41.org/all-your-questions-answered-cvpr-day-1-40f488103076#.ejrgol28h">https://gab41.lab41.org/all-your-questions-answered-cvpr-day-1-40f488103076#.ejrgol28h</a></li>
<li>day 2: <a href="https://gab41.lab41.org/the-sounds-of-cvpr-day-2-f33a3625cbf3#.nifea1blu">https://gab41.lab41.org/the-sounds-of-cvpr-day-2-f33a3625cbf3#.nifea1blu</a></li>
<li>day 3: <a href="https://gab41.lab41.org/animated-gifs-and-video-clips-cvpr-day-3-96fdcfc36e2c#.x9wd86lym">https://gab41.lab41.org/animated-gifs-and-video-clips-cvpr-day-3-96fdcfc36e2c#.x9wd86lym</a></li>
<li>day 4: <a href="https://gab41.lab41.org/caption-this-cvpr-day-4-8fe94d7aeb71#.rhzd3zg5j">https://gab41.lab41.org/caption-this-cvpr-day-4-8fe94d7aeb71#.rhzd3zg5j</a></li>
<li>day 5: <a href="https://gab41.lab41.org/five-things-i-learned-at-cvpr-2016-5e857c017f7b#.umag6vs3v">https://gab41.lab41.org/five-things-i-learned-at-cvpr-2016-5e857c017f7b#.umag6vs3v</a></li>
</ul>
<h1 id="valse-2016">VALSE 2016</h1>
<p><strong>VALSE 2016</strong></p>
<p><a href="http://mclab.eic.hust.edu.cn/valse2016/program.html">http://mclab.eic.hust.edu.cn/valse2016/program.html</a></p>
<p><strong>Science: Table of Contents: Artificial Intelligence</strong></p>
<p><a href="http://science.sciencemag.org/content/349/6245.toc">http://science.sciencemag.org/content/349/6245.toc</a></p>
<p><strong>Deep Learning and the Future of AI</strong></p>
<ul>
<li>author: by Prof. Yann LeCun (Director of AI Research at Facebook & Professor at NYU)</li>
<li>homepage: <a href="http://indico.cern.ch/event/510372/">http://indico.cern.ch/event/510372/</a></li>
<li>slides: <a href="http://indico.cern.ch/event/510372/attachments/1245509/1840815/lecun-20160324-cern.pdf">http://indico.cern.ch/event/510372/attachments/1245509/1840815/lecun-20160324-cern.pdf</a></li>
</ul>
<h1 id="icml-2015">ICML 2015</h1>
<p><strong>Video Recordings of the ICML’15 Deep Learning Workshop</strong></p>
<ul>
<li>homepage: <a href="http://dpkingma.com/?page_id=483">http://dpkingma.com/?page_id=483</a></li>
<li>youtube: <a href="https://www.youtube.com/playlist?list=PLdH9u0f1XKW8cUM3vIVjnpBfk_FKzviCu">https://www.youtube.com/playlist?list=PLdH9u0f1XKW8cUM3vIVjnpBfk_FKzviCu</a></li>
</ul>
<h1 id="iccv-2015">ICCV 2015</h1>
<p><strong>International Conference on Computer Vision (ICCV) 2015, Santiago</strong></p>
<p><a href="http://videolectures.net/iccv2015_santiago/">http://videolectures.net/iccv2015_santiago/</a></p>
<p><strong>ICCV 2015 Tutorial on Tools for Efficient Object Detection</strong></p>
<p><a href="http://mp7.watson.ibm.com/ICCV2015/ObjectDetectionICCV2015.html">http://mp7.watson.ibm.com/ICCV2015/ObjectDetectionICCV2015.html</a></p>
<p><strong>ICCV 2015 Tutorials</strong></p>
<p><a href="http://pamitc.org/iccv15/tutorials.php">http://pamitc.org/iccv15/tutorials.php</a></p>
<h1 id="imagenet--coco-joint-workshop">ImageNet + COCO Joint Workshop</h1>
<p><strong>ImageNet and MS COCO Visual Recognition Challenges Joint Workshop</strong></p>
<p><a href="http://image-net.org/challenges/ilsvrc+mscoco2015">http://image-net.org/challenges/ilsvrc+mscoco2015</a></p>
<p><strong>OpenAI: Some thoughts, mostly questions</strong></p>
<p><a href="https://medium.com/@kleinsound/openai-some-thoughts-mostly-questions-30fb63d53ef0#.32u1yt6oy">https://medium.com/@kleinsound/openai-some-thoughts-mostly-questions-30fb63d53ef0#.32u1yt6oy</a></p>
<p><strong>OpenAI — quick thoughts</strong></p>
<p><a href="http://wp.goertzel.org/openai-quick-thoughts/">http://wp.goertzel.org/openai-quick-thoughts/</a></p>
<h1 id="nips-2015">NIPS 2015</h1>
<p><strong>NIPS 2015 workshop on non-convex optimization</strong></p>
<p><a href="http://www.offconvex.org/2016/01/25/non-convex-workshop/">http://www.offconvex.org/2016/01/25/non-convex-workshop/</a></p>
<p><strong>10 Deep Learning Trends at NIPS 2015</strong></p>
<p><a href="http://codinginparadise.org/ebooks/html/blog/ten_deep_learning_trends_at_nips_2015.html">http://codinginparadise.org/ebooks/html/blog/ten_deep_learning_trends_at_nips_2015.html</a></p>
<p><strong>NIPS 2015 – Deep RL Workshop</strong></p>
<p><a href="https://gridworld.wordpress.com/2015/12/13/nips-2015-deep-rl-workshop/">https://gridworld.wordpress.com/2015/12/13/nips-2015-deep-rl-workshop/</a></p>
<p><strong>My takeaways from NIPS 2015</strong></p>
<ul>
<li>blog: <a href="http://www.danvk.org/2015/12/12/nips-2015.html">http://www.danvk.org/2015/12/12/nips-2015.html</a></li>
</ul>
<p><strong>On the spirit of NIPS 2015 and OpenAI</strong></p>
<ul>
<li>blog: <a href="https://blogs.princeton.edu/imabandit/2015/12/13/on-the-spirit-of-nips-2015-and-openai/">https://blogs.princeton.edu/imabandit/2015/12/13/on-the-spirit-of-nips-2015-and-openai/</a></li>
</ul>
<p><strong>NIPS 2015</strong></p>
<ul>
<li>Part 1: <a href="https://memming.wordpress.com/2015/12/07/nips-2015-part-1/">https://memming.wordpress.com/2015/12/07/nips-2015-part-1/</a></li>
<li>Part 2: <a href="https://memming.wordpress.com/2015/12/09/nips-2015-part-2/">https://memming.wordpress.com/2015/12/09/nips-2015-part-2/</a></li>
</ul>
<p><strong>Deep Learning - NIPS’2015 Tutorial (By Geoff Hinton, Yoshua Bengio & Yann LeCun)</strong></p>
<ul>
<li>slides: <a href="http://www.iro.umontreal.ca/~bengioy/talks/DL-Tutorial-NIPS2015.pdf">http://www.iro.umontreal.ca/~bengioy/talks/DL-Tutorial-NIPS2015.pdf</a></li>
</ul>
<p><strong>NIPS 2015 Posner Lecture – Zoubin Ghahramani: Probabilistic Machine Learning</strong></p>
<p><a href="https://gridworld.wordpress.com/2015/12/08/nips-2015-posner-lecture-zoubin-ghahramani/">https://gridworld.wordpress.com/2015/12/08/nips-2015-posner-lecture-zoubin-ghahramani/</a></p>
<p><strong>NIPS 2015 Deep Learning Tutorial Notes</strong></p>
<p><a href="http://jatwood.org/blog/nips-deep-learning-tutorial.html">http://jatwood.org/blog/nips-deep-learning-tutorial.html</a></p>
<h1 id="dlss-2015">DLSS 2015</h1>
<p><strong>26 Things I Learned in the Deep Learning Summer School</strong></p>
<p><a href="http://www.marekrei.com/blog/26-things-i-learned-in-the-deep-learning-summer-school/">http://www.marekrei.com/blog/26-things-i-learned-in-the-deep-learning-summer-school/</a> <br />
<a href="http://www.csdn.net/article/2015-09-16/2825716">http://www.csdn.net/article/2015-09-16/2825716</a></p>
<p><strong>Deep Learning Summer School 2015</strong></p>
<ul>
<li>homepage: <a href="https://sites.google.com/site/deeplearningsummerschool/schedule">https://sites.google.com/site/deeplearningsummerschool/schedule</a></li>
<li>slides: <a href="http://docs.huihoo.com/deep-learning/deeplearningsummerschool/2015/">http://docs.huihoo.com/deep-learning/deeplearningsummerschool/2015/</a></li>
<li>github: <a href="https://github.com/mila-udem/summerschool2015">https://github.com/mila-udem/summerschool2015</a></li>
</ul>
<h1 id="iclr-2015">ICLR 2015</h1>
<p><strong>Conference Schedule</strong></p>
<p><a href="http://www.iclr.cc/doku.php?id=iclr2015:main&utm_content=buffer0b339&utm_campaign=buffer#conference_schedule">http://www.iclr.cc/doku.php?id=iclr2015:main&utm_content=buffer0b339&utm_campaign=buffer#conference_schedule</a></p>
<h1 id="cvpr-2014">CVPR 2014</h1>
<p><strong>TUTORIAL ON DEEP LEARNING FOR VISION</strong></p>
<p><a href="https://sites.google.com/site/deeplearningcvpr2014/">https://sites.google.com/site/deeplearningcvpr2014/</a></p>
<h1 id="courses">Courses</h1>
<p><strong>CS 007: PERSONAL FINANCE FOR ENGINEERS</strong></p>
<ul>
<li>intro: Stanford University 2017-8</li>
<li>homepage: <a href="https://cs007.blog/">https://cs007.blog/</a></li>
</ul>
<h1 id="pyinstaller-and-others">PyInstaller and Others</h1>
<h1 id="quick-introduction">Quick introduction</h1>
<p>I recently needed to convert a Python program into a binary program.
That is, you don’t want to expose any of your source code or data files;
only one binary executable file will be provided.</p>
<p><a href="http://www.pyinstaller.org/">PyInstaller</a> is a fairly good choice to use,
and can work on many platforms like Linux, Windows, etc.</p>
<p>You can check out its official git repository at
<a href="https://github.com/pyinstaller/pyinstaller">https://github.com/pyinstaller/pyinstaller</a>.</p>
<p>It is recommended to first try out its official, stable release –
but when something weird comes up, you can turn to the GitHub dev branch for help – that is actually what I did.</p>
<h1 id="hidden-import">hidden-import</h1>
<p>There are 2 basic ways to build your Python scripts. I chose to use pyinstaller.py directly,
although you can use a <em>spec</em> file if you want.</p>
<p>When building Python scripts, you will probably get some build errors telling you that some Python packages cannot be imported,
like:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ImportError: The 'packaging' package is required
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ImportError: No module named core_cy
</code></pre></div></div>
<p>I might explain it in the future, but to put it simply, some Python packages need to be declared as “hidden imports” to get around this issue.
So now we can set up a fundamental build script to help our work:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/path/to/git/pyinstaller/pyinstaller.py \
--onefile \
--hidden-import=skimage.io \
--hidden-import=skimage.transform \
--hidden-import=skimage.filter.rank.core_cy \
--hidden-import=packaging \
--hidden-import=packaging.version \
--hidden-import=packaging.specifiers \
--hidden-import=packaging.requirements \
--hidden-import=scipy.linalg \
--hidden-import=scipy.linalg.cython_blas \
--hidden-import=scipy.linalg.cython_lapack \
--hidden-import=scipy.ndimage \
--hidden-import=skimage._shared.interpolation \
--hidden-import=google.protobuf.internal \
--hidden-import=google.protobuf.internal.enum_type_wrapper \
--hidden-import=google.protobuf.descriptor \
target_program.py
</code></pre></div></div>
<h1 id="what-is-wrong-with-mkl">What is wrong with MKL</h1>
<p>One weird error I met was the Intel MKL FATAL ERROR:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.
</code></pre></div></div>
<p>Since I use Anaconda, I found that MKL had already been installed in the Anaconda install location
and I could find these 2 files easily, but this error still popped up.
If I remember correctly, the solution is even weirder:
simply update numpy to the latest version:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>conda update numpy
</code></pre></div></div>
<p>or:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>conda install linux-64_numpy-1.11.2-py27_0.tar.bz2
</code></pre></div></div>
<p>I don’t know what happened exactly, but it looks like it has been fixed. Hmm…</p>
<h1 id="add-data-and-_meipass">–add-data and _MEIPASS</h1>
<p>PyInstaller can also bundle data files into your program. When the bundled app runs,
it will load these data files from a different location.
Here is a helper function to locate your data files:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import os
import sys

def resource_path(relative):
    if getattr(sys, 'frozen', False):
        # we are running in a PyInstaller bundle
        bundle_dir = sys._MEIPASS
    else:
        # we are running in a normal Python environment
        bundle_dir = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(bundle_dir, relative)
</code></pre></div></div>
<p>You can put your data file in your local directory,
but you need to refer to the data file in the Python script in the right way:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>target_file = resource_path('target_data_file1')
</code></pre></div></div>
<p>In the build script, you need to configure the data files or folders:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> --add-data="target_data_file1:." \
--add-data="target_data_file2:." \
    --add-data="folder1/sub_folder1/target_data_file3:folder1/sub_folder1" \
</code></pre></div></div>
<h1 id="missing-libs">Missing libs</h1>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> --add-binary="libgfortran.so.1:lib" \
</code></pre></div></div>
<p>The build error told me one *.so file is required, so just add it.</p>
<h1 id="config-pythonpath">Config PYTHONPATH</h1>
<p>Some of your Python scripts might depend on relative paths,
so you will need to put these dependencies into the build script:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>--paths="../dependency_folder" \
</code></pre></div></div>
<h1 id="continue-tackling-weird-stuffs">Continue tackling weird stuffs</h1>
<p>Up until now it sounds like an easy task.
But what happened next consumed about 2 days of my time – I wish I could have known how to avoid it :-(</p>
<p>My Python project includes a Caffe module which runs a simple image classification process.
One basic function is <a href="https://github.com/BVLC/caffe">Caffe</a> calling skimage.io to load an image:</p>
<p><a href="https://github.com/BVLC/caffe/blob/master/python/caffe/io.py">https://github.com/BVLC/caffe/blob/master/python/caffe/io.py</a></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def load_image(filename, color=True):
    img = skimage.img_as_float(skimage.io.imread(filename, as_grey=not color)).astype(np.float32)
    if img.ndim == 2:
        img = img[:, :, np.newaxis]
        if color:
            img = np.tile(img, (1, 1, 3))
    elif img.shape[2] == 4:
        img = img[:, :, :3]
    return img
</code></pre></div></div>
<p>I wondered whether PyInstaller currently has good support for the Python package skimage.
From what I know so far, it doesn’t.</p>
<p>Run from the Python source files, it works fine. But when I packed everything into one single binary,
it could not load images at all. After debugging and googling for a long time –
I kept thinking maybe I had done something wrong – I gave up on skimage. PyInstaller hates it!
So in the end I used cv2 instead, and it works smoothly.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def cv2_load_image(filename, color=True):
img = cv2.imread(filename).astype(np.float32) / 255
if img.ndim == 3:
img[:,:,:] = img[:,:,2::-1]
if img.ndim == 2:
img = img[:, :, np.newaxis]
if color:
img = np.tile(img, (1, 1, 3))
elif img.shape[2] == 4:
img = img[:, :, :3]
return img
</code></pre></div></div>
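<p>One caveat with this replacement (not an issue I hit, but worth knowing): <code class="language-plaintext highlighter-rouge">cv2.imread</code> returns <code class="language-plaintext highlighter-rouge">None</code> for a missing or unreadable file instead of raising, so the <code class="language-plaintext highlighter-rouge">.astype</code> call would fail with an <code class="language-plaintext highlighter-rouge">AttributeError</code>; you may want to check for <code class="language-plaintext highlighter-rouge">None</code> first.</p>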
<p>For all the details above, please do check out the PyInstaller documentation:
<a href="https://media.readthedocs.org/pdf/pyinstaller/latest/pyinstaller.pdf">https://media.readthedocs.org/pdf/pyinstaller/latest/pyinstaller.pdf</a></p>
<h1 id="looks-like-we-make-it">Looks like we make it!</h1>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/path/to/git/pyinstaller/pyinstaller.py \
--onefile \
--hidden-import=skimage.io \
--hidden-import=skimage.transform \
--hidden-import=skimage.filter.rank.core_cy \
--hidden-import=packaging \
--hidden-import=packaging.version \
--hidden-import=packaging.specifiers \
--hidden-import=packaging.requirements \
--hidden-import=scipy.linalg \
--hidden-import=scipy.linalg.cython_blas \
--hidden-import=scipy.linalg.cython_lapack \
--hidden-import=scipy.ndimage \
--hidden-import=skimage._shared.interpolation \
--hidden-import=google.protobuf.internal \
--hidden-import=google.protobuf.internal.enum_type_wrapper \
--hidden-import=google.protobuf.descriptor \
--add-binary="libgfortran.so.1:lib" \
--add-data="target_data_file1:." \
--add-data="target_data_file2:." \
--add-data="folder1/sub_folder1/target_data_file3:folder1/sub_folder1/target_data_file3" \
--paths="../dependency_folder" \
target_program.py
</code></pre></div></div>
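<p>If the build succeeds, the single-file executable ends up under the default <code class="language-plaintext highlighter-rouge">dist</code> output directory:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./dist/target_program
</code></pre></div></div>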
<h1 id="misc">Misc</h1>
<p>I just found a simple way to read/write binary files in Python:
use cPickle to dump data to a file in binary format.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import cPickle
a = ('img_path1', 1111, 222.222, 333, 444, 555, 6666)
b = ('img_path2', 777, 88.8888, 9999, 1010, 1111, 1212)
c = []
c.append(a)
c.append(b)
with open('wb_txt', 'wb') as f:
cPickle.dump(c, f, cPickle.HIGHEST_PROTOCOL)
with open('wb_txt', 'rb') as f:
data = cPickle.load(f)
print data
</code></pre></div></div>
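<p>A note on the protocol argument: <code class="language-plaintext highlighter-rouge">cPickle.HIGHEST_PROTOCOL</code> selects the most compact binary format, while the default protocol 0 writes a much larger ASCII representation, so the binary protocol is usually what you want for big data files.</p>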
<p>Hopefully this note can help someone new to PyInstaller, like me, find a way out of the slough.</p>
C++ Programming Solutions2016-09-07T00:00:00+00:00https://handong1587.github.io/programming_study/2016/09/07/cpp-programming-solutions<h1 id="reference-a-nonstatic-mfc-class-member-in-a-static-thread-function">Reference a nonstatic MFC class member in a static thread function</h1>
<p>Declare a thread function:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>static DWORD WINAPI ThreadFunc(LPVOID lpParameter);
</code></pre></div></div>
<p>Pass a <code class="language-plaintext highlighter-rouge">this</code> pointer to the thread function:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>HANDLE hThread = CreateThread(NULL, 0, ThreadFunc, this, 0, NULL);
</code></pre></div></div>
<p>In the thread function definition:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>DWORD WINAPI CMFCDemoDlg::ThreadFunc(LPVOID lpParameter)
{
//convert lpParameter to class pointer type
CMFCDemoDlg* pMfcDemo = (CMFCDemoDlg*)lpParameter;
// Now you can reference the CMFCDemoDlg class members
......
}
</code></pre></div></div>
Add Lunr Search Plugin For Blog2016-07-31T00:00:00+00:00https://handong1587.github.io/web_dev/2016/07/31/add-lunr-search-plugin-for-blog<p>I decided to add a full-text search plugin to my blog:</p>
<p><a href="https://github.com/slashdotdash/jekyll-lunr-js-search">https://github.com/slashdotdash/jekyll-lunr-js-search</a> .</p>
<p>Although it should be easy work, there are still some rules that I think are crucial to follow (at least for me).</p>
<p>First rule: DO NOT try to do this on Windows.</p>
<p>On Windows (and OS X), you cannot even manage to gem install therubyracer, an essential component required by jekyll-lunr-js-search.
See my previous post:</p>
<p><a href="http://handong1587.github.io/web_dev/2016/07/03/install-therubyracer.html">http://handong1587.github.io/web_dev/2016/07/03/install-therubyracer.html</a></p>
<p>Make sure you don’t include jQuery twice. It can really cause all sorts of issues.</p>
<p>This post explains it in more detail:</p>
<p><strong>Double referencing jQuery deletes all assigned plugins.</strong></p>
<p><a href="https://bugs.jquery.com/ticket/10066">https://bugs.jquery.com/ticket/10066</a></p>
<p>I kept receiving a weird error like:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>TypeError: $(...).lunrSearch is not a function
</code></pre></div></div>
<p>and it took me a long time to find out why this happened.</p>
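<p>The fix itself is trivial once you know the cause: make sure jQuery is pulled in exactly once, before the search plugin scripts. A sketch of the include order (the exact script paths depend on your own layout):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;!-- include jQuery once, before the lunr scripts --&gt;
&lt;script src="/js/jquery.min.js"&gt;&lt;/script&gt;
&lt;script src="/js/jquery.lunr.search.js"&gt;&lt;/script&gt;
</code></pre></div></div>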
<p>For a newbie like me who <em>knows nothing at all</em> about front-end web development,
all the work became trial and error, plus Google and Stack Overflow. So great, now it works.</p>
<p>Thanks to <em>My Chemical Romance</em> for helping me through those tough debugging nights!</p>
vsftpd Commands2016-07-28T00:00:00+00:00https://handong1587.github.io/linux_study/2016/07/28/vsftpd-cmd<p>FTP commands are among the ones Internet users run most frequently. Whether you use FTP under DOS or UNIX, you will encounter a large number of FTP internal commands.
Being familiar with these internal commands and applying them flexibly can make your work much easier and more efficient.
The FTP command-line format is: ftp -v -d -i -n -g [hostname], where -v displays all responses from the remote server;
-n disables auto-login, i.e. does not use the .netrc file;
-d enables debug mode;
-g disables filename globbing.</p>
<p>The internal commands of ftp are as follows (square brackets denote optional arguments):</p>
<p>1.![cmd[args]]: executes an interactive shell on the local machine; exit returns to the ftp environment, e.g. !ls *.zip.</p>
<p>2.$ macro-name[args]: executes the macro definition macro-name.</p>
<p>3.account[password]: supplies the supplemental password required for accessing system resources after a successful login to the remote system.</p>
<p>4.append local-file[remote-file]: appends a local file to a file on the remote host; if no remote filename is given, the local filename is used.</p>
<p>5.ascii: uses ASCII transfer mode.</p>
<p>6.bell: rings the bell once after each command finishes.</p>
<p>7.bin: uses binary transfer mode.</p>
<p>8.bye: exits the ftp session.</p>
<p>9.case: when using mget, converts uppercase letters in remote filenames to lowercase.</p>
<p>10.<code class="language-plaintext highlighter-rouge">cd remote-dir</code>: changes to directory remote-dir on the remote host.</p>
<p>11.cdup: changes to the parent directory on the remote host.</p>
<p>12.<code class="language-plaintext highlighter-rouge">chmod mode file-name</code>: sets the access mode of remote file file-name to mode, e.g. <code class="language-plaintext highlighter-rouge">chmod 777 a.out</code>.</p>
<p>13.close: terminates the ftp session with the remote server (the counterpart of open).</p>
<p>14.cr: when transferring files in ASCII mode, converts carriage-return/line-feed into line-feed.</p>
<p>15.<code class="language-plaintext highlighter-rouge">delete remote-file</code>: deletes a file on the remote host.</p>
<p>16.debug [debug-value]: sets debug mode, displaying every command sent to the remote host, e.g. debug 3; setting the value to 0 turns debugging off.</p>
<p>17.<code class="language-plaintext highlighter-rouge">dir [remote-dir] [local-file]</code>: lists the remote directory and saves the result to local file local-file.</p>
<p>18.disconnect: same as close.</p>
<p>19.form format: sets the file transfer form to format; the default is file.</p>
<p>20.<code class="language-plaintext highlighter-rouge">get remote-file [local-file]</code>: transfers remote-file from the remote host to local-file on the local disk.</p>
<p>21.glob: sets filename expansion for mdelete, mget and mput; by default filenames are not expanded (same as the command-line -g option).</p>
<p>22.hash: displays a hash mark (#) for every 1024 bytes transferred.</p>
<p>23.help [cmd]: displays help information for internal command cmd, e.g. help get.</p>
<p>24.idle [seconds]: sets the inactivity timer of the remote server to [seconds] seconds.</p>
<p>25.image: sets binary transfer mode (same as binary).</p>
<p>26.lcd [dir]: changes the local working directory to dir.</p>
<p>27.<code class="language-plaintext highlighter-rouge">ls [remote-dir] [local-file]</code>: lists remote directory remote-dir and saves the listing to local file local-file.</p>
<p>28.macdef macro-name: defines a macro; the definition ends at a blank line below macdef.</p>
<p>29.<code class="language-plaintext highlighter-rouge">mdelete [remote-file]</code>: deletes files on the remote host.</p>
<p>30.<code class="language-plaintext highlighter-rouge">mdir remote-files local-file</code>: like dir, but multiple remote files may be specified, e.g. mdir *.o *.zip outfile.</p>
<p>31.<code class="language-plaintext highlighter-rouge">mget remote-files</code>: transfers multiple remote files.</p>
<p>32.<code class="language-plaintext highlighter-rouge">mkdir dir-name</code>: creates a directory on the remote host.</p>
<p>33.<code class="language-plaintext highlighter-rouge">mls remote-file local-file</code>: same as nlist, but multiple filenames may be specified.</p>
<p>34.mode [modename]: sets the file transfer mode to modename; the default is stream mode.</p>
<p>35.modtime file-name: displays the last modification time of a remote file.</p>
<p>36.mput local-file: transfers multiple files to the remote host.</p>
<p>37.newer file-name: re-transfers the file if the modification time of file-name on the remote machine is more recent than that of the file with the same name on the local disk.</p>
<p>38.nlist [remote-dir] [local-file]: lists the files of a remote directory and saves the listing to local-file on the local disk.</p>
<p>39.nmap [inpattern outpattern]: sets a filename mapping mechanism so that certain characters in filenames are converted during transfer, e.g. with nmap $1.$2.$3 [$1,$2].[$2,$3], transferring file a1.a2.a3 produces the filename a1,a2. This command is especially useful when the remote host is a non-UNIX machine.</p>
<p>40.ntrans [inchars[outchars]]: sets a filename character translation mechanism, e.g. ntrans L R turns filename LLL into RRR.</p>
<p>41.open host [port]: establishes a connection to the specified ftp server; a port may be given.</p>
<p>42.passive: enters passive transfer mode.</p>
<p>43.prompt: sets interactive prompting during multiple-file transfers.</p>
<p>44.proxy ftp-cmd: executes an ftp command on a secondary control connection. This command allows connecting to two ftp servers to transfer files between them; the first ftp command must be open, to establish the connection between the two servers first.</p>
<p>45.<code class="language-plaintext highlighter-rouge">put local-file [remote-file]</code>: transfers local file local-file to the remote host.</p>
<p>46.pwd: displays the current working directory on the remote host.</p>
<p>47.quit: same as bye; exits the ftp session.</p>
<p>48.quote arg1,arg2…: sends the arguments verbatim to the remote ftp server, e.g. quote syst.</p>
<p>49.<code class="language-plaintext highlighter-rouge">recv remote-file [local-file]</code>: same as get.</p>
<p>50.<code class="language-plaintext highlighter-rouge">reget remote-file [local-file]</code>: like get, but if local-file exists, resumes the transfer from where the previous transfer was interrupted.</p>
<p>51.rhelp [cmd-name]: requests help from the remote host.</p>
<p>52.rstatus [file-name]: if no filename is given, displays the status of the remote host; otherwise displays the status of the file.</p>
<p>53.<code class="language-plaintext highlighter-rouge">rename [from] [to]</code>: renames a file on the remote host.</p>
<p>54.reset: clears the reply queue.</p>
<p>55.restart marker: restarts the get or put at the specified marker, e.g. restart 130.</p>
<p>56.<code class="language-plaintext highlighter-rouge">rmdir dir-name</code>: removes a directory on the remote host.</p>
<p>57.runique: sets unique storage for local filenames; if a file already exists, a suffix .1, .2 and so on is appended to the original filename.</p>
<p>58.send local-file[remote-file]: same as put.</p>
<p>59.sendport: sets the use of the PORT command.</p>
<p>60.site arg1,arg2…: sends the arguments verbatim as a SITE command to the remote ftp host, e.g. site idle 7200.</p>
<p>61.<code class="language-plaintext highlighter-rouge">size file-name</code>: displays the size of a remote file.</p>
<p>62.<code class="language-plaintext highlighter-rouge">status</code>: displays the current ftp status.</p>
<p>63.struct [struct-name]: sets the file transfer structure to struct-name; the default is the stream structure.</p>
<p>64.sunique: sets remote filename storage to unique (the counterpart of runique).</p>
<p>65.system: displays the operating system type of the remote host.</p>
<p>66.tenex: sets the file transfer type to the one required by TENEX machines.</p>
<p>67.tick: sets the byte counter during transfers.</p>
<p>68.trace: sets packet tracing.</p>
<p>69.type [type-name]: sets the file transfer type to type-name; the default is ascii, e.g. type binary sets binary transfer mode.</p>
<p>70.umask [newmask]: sets the default umask on the remote server to newmask, e.g. umask 3.</p>
<p>71.<code class="language-plaintext highlighter-rouge">user user-name [password] [account]</code>: identifies yourself to the remote host; if a password is required, you must enter it, e.g. user anonymous my@email.</p>
<p>72.verbose: same as the command-line -v option, i.e. verbose mode: all responses from the ftp server are displayed to the user; the default is on.</p>
<p>73.?[cmd]: same as help.</p>
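<p>To put a few of these together, a typical download session might look like this (hostname and paths are placeholders):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ftp -v ftp.example.com
ftp> user anonymous my@email
ftp> bin
ftp> hash
ftp> cd pub/images
ftp> mget *.jpg
ftp> bye
</code></pre></div></div>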
<h1 id="ref">Ref</h1>
<p><a href="http://www.jb51.net/os/RedHat/1133.html">http://www.jb51.net/os/RedHat/1133.html</a></p>
Setup vsftpd on Ubuntu 14.102016-07-27T00:00:00+00:00https://handong1587.github.io/linux_study/2016/07/27/setup-vsftpd<h1 id="setup-vsftpd">Setup vsftpd</h1>
<p>Install vsftpd:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install vsftpd
</code></pre></div></div>
<p>Check if vsftpd installed successfully:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo service vsftpd status
</code></pre></div></div>
<p>Create <code class="language-plaintext highlighter-rouge">/data/jinbin.lin/uftp</code> as the user home directory:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo mkdir /data/jinbin.lin/uftp
</code></pre></div></div>
<p>Add user <code class="language-plaintext highlighter-rouge">uftp</code>, with <code class="language-plaintext highlighter-rouge">-d</code> setting the home directory and <code class="language-plaintext highlighter-rouge">-s</code> the login shell:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo useradd -d /data/jinbin.lin/uftp -s /bin/bash uftp
</code></pre></div></div>
<p>Set user password (need to enter password twice):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo passwd uftp
</code></pre></div></div>
<p>Edit vsftpd configuration file:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/etc/vsftpd.conf
</code></pre></div></div>
<p>Add the following lines at the end of <code class="language-plaintext highlighter-rouge">vsftpd.conf</code> (with userlist_deny=NO, the user list acts as an allow-list: only users in userlist_file may log in):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>userlist_deny=NO
userlist_enable=YES
userlist_file=/etc/allowed_users
</code></pre></div></div>
<p>Modify the following configuration options:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>local_enable=YES
write_enable=YES
</code></pre></div></div>
<p>Edit <code class="language-plaintext highlighter-rouge">/etc/allowed_users</code> and add the username: uftp</p>
<p>Check <code class="language-plaintext highlighter-rouge">/etc/ftpusers</code> and delete <code class="language-plaintext highlighter-rouge">uftp</code> if it is listed.
This file records usernames that are forbidden to access the FTP server.</p>
<p>Restart vsftpd:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo service vsftpd restart
</code></pre></div></div>
<h1 id="close-ftp-server">Close FTP server</h1>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo service vsftpd stop
</code></pre></div></div>
<h1 id="visit-ftp-server">Visit FTP server</h1>
<p>(By default, the anonymous user is disabled)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ftp://user:password@hostname/
</code></pre></div></div>
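<p>For example, with the <code class="language-plaintext highlighter-rouge">uftp</code> user created above (the host address is a placeholder):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ftp://uftp:your_password@192.168.1.100/
</code></pre></div></div>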
<h1 id="forbid-user-access-top-level-directory">Forbid user access top level directory</h1>
<p>Create an empty file <code class="language-plaintext highlighter-rouge">vsftpd.chroot_list</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo touch /etc/vsftpd.chroot_list
</code></pre></div></div>
<p>Modify the configuration as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>chroot_local_user=YES
chroot_list_enable=NO
chroot_list_file=/etc/vsftpd.chroot_list
</code></pre></div></div>
<p>If you want write permission to the user home directory, add the following (otherwise you will hit this error at login:
“500 OOPS: vsftpd: refusing to run with writable root inside chroot ()”):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>allow_writeable_chroot=YES
</code></pre></div></div>
<p>Restart vsftpd:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo service vsftpd restart
</code></pre></div></div>
<h1 id="does-not-allow-the-user-to-change-the-specified-chroot_list_file-root">Does not allow the user to change the specified chroot_list_file root</h1>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>chroot_local_user=NO
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd.chroot_list
</code></pre></div></div>
<h1 id="allows-only-specified-users-to-change-chroot_list_file-root">Allows only specified users to change chroot_list_file root</h1>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>chroot_local_user=YES
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd.chroot_list
</code></pre></div></div>
<h1 id="frequently-used-command">Frequently used command</h1>
<p><code class="language-plaintext highlighter-rouge">mkdir</code></p>
<p><code class="language-plaintext highlighter-rouge">dir</code> or <code class="language-plaintext highlighter-rouge">ls</code></p>
<p><code class="language-plaintext highlighter-rouge">put</code></p>
<p><code class="language-plaintext highlighter-rouge">get</code></p>
<h1 id="refs">Refs</h1>
<p><strong>How to Install and Configure vsftpd on Ubuntu 14.04 LTS</strong></p>
<p><a href="http://www.liquidweb.com/kb/how-to-install-and-configure-vsftpd-on-ubuntu-14-04-lts/">http://www.liquidweb.com/kb/how-to-install-and-configure-vsftpd-on-ubuntu-14-04-lts/</a></p>
<p><strong>vsftpd configuration: chroot_local_user and chroot_list_enable explained</strong></p>
<p><a href="http://blog.csdn.net/bluishglc/article/details/42398811">http://blog.csdn.net/bluishglc/article/details/42398811</a></p>