{"id":3085,"date":"2017-09-11T00:00:54","date_gmt":"2017-09-11T07:00:54","guid":{"rendered":"http:\/\/192.168.3.4\/?p=3085"},"modified":"2018-01-09T06:51:49","modified_gmt":"2018-01-09T14:51:49","slug":"depthmapping-with-ffmpeg-and-opencv","status":"publish","type":"post","link":"https:\/\/www.cloudacm.com\/?p=3085","title":{"rendered":"Depthmapping with FFMpeg and OpenCV"},"content":{"rendered":"<p>Depthmaps are typically greyscale images that represent how far away an object is by its degree of shade.\u00a0 They can be white-near, black-near, or use some other color scale such as ironbar.\u00a0 Depthmaps are useful for creating renders or for systems that need distance sensing.<\/p>\n<p>In this post I&#8217;ll be using footage that I shot with two Innovv cameras.\u00a0 The rig was not precise, so I also post-processed the footage to correct distortion and alignment issues.\u00a0 The post-processing topic was covered in last week&#8217;s post.<\/p>\n<p><iframe loading=\"lazy\" title=\"SimpleCV Depthmap using FFMpeg\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/-OsJ9BTnkyc?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>What we&#8217;ll do is take a segment from each of the left and right cameras and use those for our depthmapping.\u00a0 Here are the commands to extract the segments.<\/p>\n<pre>ffmpeg -ss 00:00:00 -i \"\/home\/local\/Desktop\/STP 2017\/Anaglyph\/L0000004-Right-Defish-Rotate-Trim.MOV\" -t 00:00:15 -acodec copy -vcodec copy -async 1 \"\/home\/local\/Desktop\/DepthMap\/Right\/Right_Seg.MOV\"\r\nffmpeg -ss 00:00:00 -i \"\/home\/local\/Desktop\/STP 2017\/Anaglyph\/L0000004-Left-Defish-Trim.MOV\" -t 00:00:15 -acodec copy -vcodec copy -async 1 \"\/home\/local\/Desktop\/DepthMap\/Left\/Left_Seg.MOV\"\r\n<\/pre>\n<p>The OpenCV script is based on the work demonstrated here: <a 
href=\"http:\/\/docs.opencv.org\/3.1.0\/dd\/d53\/tutorial_py_depthmap.html\">http:\/\/docs.opencv.org\/3.1.0\/dd\/d53\/tutorial_py_depthmap.html<\/a><\/p>\n<p>The Python script in that tutorial only works with a single pair of images.\u00a0 I have modified it so it will process a sequence of image files that have incremental numbering in the file name.\u00a0 First I&#8217;ll convert our video segments to images using these commands.<\/p>\n<pre>ffmpeg -i \"\/home\/local\/Desktop\/DepthMap\/Right\/Right_Seg.MOV\" -vf fps=30 \"\/home\/local\/Desktop\/DepthMap\/Right\/Right_Seg%d.png\"\r\nffmpeg -i \"\/home\/local\/Desktop\/DepthMap\/Left\/Left_Seg.MOV\" -vf fps=30 \"\/home\/local\/Desktop\/DepthMap\/Left\/Left_Seg%d.png\"\r\n<\/pre>\n<p>Now that I have the images, the Python script can be run to process them.\u00a0 Here is the Python code.<\/p>\n<pre>==== DepthMap.py\r\n\r\nimport numpy as np\r\nimport cv2\r\nfrom matplotlib import pyplot as plt\r\n\r\ncount = 0\r\nRimg = '\/home\/local\/Desktop\/DepthMap\/Right\/Right_Seg'\r\nLimg = '\/home\/local\/Desktop\/DepthMap\/Left\/Left_Seg'\r\nDimg = '\/home\/local\/Desktop\/DepthMap\/Depth\/Depth_Seg'\r\nExt = '.png'\r\n\r\n# The values in range(x,y,z) are x=start, y=end, z=step\r\nfor N in range(1,453,1):\r\n    count = count + 1\r\n\r\n    # Frames are not aligned; the +4 offset compensates\r\n    imgR = cv2.imread(Rimg + str(count + 4) + Ext, 0)  # 0 = load as greyscale\r\n    imgL = cv2.imread(Limg + str(count) + Ext, 0)\r\n\r\n    # cv2.StereoBM(preset, ndisparities, SADWindowSize) -- OpenCV 2.4 API\r\n    stereo = cv2.StereoBM(1, 16, 15)\r\n    disparity = stereo.compute(imgL, imgR)\r\n\r\n    plt.imshow(disparity, 'gray')\r\n    plt.savefig(Dimg + str(count) + Ext)\r\n====\r\n<\/pre>\n<p>One thing that I noticed with my version was that the Python script tanked after 142 images.\u00a0 To continue on, I changed the initial value of the count variable to 142 and re-ran the script.\u00a0 Eventually it finished.\u00a0 I just want to make that clear for anyone reproducing the effort.<\/p>\n<p>This Python script will create the file list needed for the image-to-video 
conversion.<\/p>\n<pre>==== FileList.py\r\n\r\ncount = 0\r\nDimg = 'file \\'\/home\/local\/Desktop\/DepthMap\/Depth\/Depth_Seg'\r\nExt = '.png\\''\r\ntext_file = open(\"\/home\/local\/Desktop\/DepthMap\/Depth\/Output.txt\", \"w\")\r\n\r\n# The values in range(x,y,z) are x=start, y=end, z=step\r\nfor N in range(1,453,1):\r\n    count = count + 1\r\n    text_file.write(Dimg + str(count) + Ext)\r\n    text_file.write('\\n')\r\n\r\ntext_file.close()\r\n====<\/pre>\n<p>Now we can convert the image sequence into a video using this command.<\/p>\n<pre>ffmpeg -y -r 30 -f concat -safe 0 -i \"\/home\/local\/Desktop\/DepthMap\/Depth\/Output.txt\" -c:v libx264 -vf \"fps=30,format=yuv420p\" \"\/home\/local\/Desktop\/DepthMap\/Depth\/Depth_Seg.mov\"\r\n<\/pre>\n<p>Saving the frames through matplotlib introduced axes and a scale border around each image, which I would like to remove.\u00a0 After finding the cropping coordinates using GIMP, I ran this command.<\/p>\n<pre>ffmpeg -i \"\/home\/local\/Desktop\/DepthMap\/Depth\/Depth_Seg.mov\" -filter:v \"crop=621:351:100:125\" -c:a copy \"\/home\/local\/Desktop\/DepthMap\/Depth_Seg_Crop.mov\"\r\n<\/pre>\n<p>To give a better presentation, I decided to inset the depthmap on top of the original left-camera footage.\u00a0 I desaturated the original video to draw more attention to the depthmap inset.<\/p>\n<pre>ffmpeg -i \"\/home\/local\/Desktop\/DepthMap\/Left\/Left_Seg.MOV\" -vf \"eq=saturation=0\" \"\/home\/local\/Desktop\/DepthMap\/De-Saturate.mov\"\r\n<\/pre>\n<p>Then I inserted the depthmap as a picture-in-picture overlay.<\/p>\n<pre>ffmpeg -i \"\/home\/local\/Desktop\/DepthMap\/De-Saturate.mov\" -vf \"movie=\/home\/local\/Desktop\/DepthMap\/Depth_Seg_Crop.mov, scale=621:351 [vid2]; [in][vid2] overlay=main_w-overlay_w-20:main_h-overlay_h-20\" \"\/home\/local\/Desktop\/DepthMap\/De-Saturate_PicNPic.mov\"\r\n<\/pre>\n<p>Finally, I drew a red border around the depthmap.<\/p>\n<pre>ffmpeg -i \"\/home\/local\/Desktop\/DepthMap\/De-Saturate_PicNPic.mov\" -vf drawbox=x=1197:y=663:w=621:h=351:color=red@1 
\"\/home\/local\/Desktop\/DepthMap\/De-Saturate_PicNPic_Final.mov\"\r\n<\/pre>\n<p>That completes the process for creating depthmaps using FFMpeg and OpenCV.\u00a0 The results are far from ideal; this is mainly a proof of concept using existing footage.\u00a0 Techniques for creating depthmaps are still maturing.\u00a0 One effort by University College London is producing some striking results.<\/p>\n<p><iframe loading=\"lazy\" title=\"Turning 2D into depth images\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/KNft4RFsK28?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Depthmaps are typically greyscale images that represent how far an object is by the degree of shade.\u00a0 They can be white-near, black-near, or some other color scale such as ironbar.\u00a0 Depthmaps are useful for creating renders or systems that have a distance sensing need. 
In this post I&#8217;ll be using footage that I shot using 2 Innovv cameras.\u00a0 The rig was not precise, so I also post processed it to correct distortion and alignment issues.\u00a0 The post processing topic was&#8230;<\/p>\n<p class=\"read-more\"><a class=\"btn btn-default\" href=\"https:\/\/www.cloudacm.com\/?p=3085\"> Read More<span class=\"screen-reader-text\">  Read More<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9,3],"tags":[],"class_list":["post-3085","post","type-post","status-publish","format-standard","hentry","category-computer-vision","category-rd"],"_links":{"self":[{"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/posts\/3085","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3085"}],"version-history":[{"count":16,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/posts\/3085\/revisions"}],"predecessor-version":[{"id":3101,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/posts\/3085\/revisions\/3101"}],"wp:attachment":[{"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3085"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3085"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3085"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}