{"id":2523,"date":"2016-09-05T00:00:05","date_gmt":"2016-09-05T07:00:05","guid":{"rendered":"http:\/\/192.168.3.4\/?p=2523"},"modified":"2018-01-09T06:51:14","modified_gmt":"2018-01-09T14:51:14","slug":"multimedia-meta-data-more-than-meets-the-eye","status":"publish","type":"post","link":"https:\/\/www.cloudacm.com\/?p=2523","title":{"rendered":"Multimedia Meta-data \u2013 More than meets the eye"},"content":{"rendered":"<p>It is time to step back from system administrative tasks and move back into the subject of digitized multimedia data processing.\u00a0 The subject is a vast, ever-evolving beast.\u00a0 Multimedia basically consists of some combination of visual or auditory information.\u00a0 With digitized multimedia, additional data can be layered in, which provides any number of uses.<\/p>\n<p>In this post, I&#8217;ll be focusing on image file meta-data.\u00a0 There are several levels and flavors of meta-data which I won&#8217;t cover in detail here.\u00a0 However, I&#8217;ll give some basics about device information, Geo-tagging, and depth-maps.<\/p>\n<p>A good way to think of meta-data is to compare it to a library card catalog.\u00a0 Library books contain written media that hold a large quantity of information.\u00a0 The card catalog has basic information about each book, such as author, publish date, location, etc.\u00a0 Essentially, this is how meta-data came to be.\u00a0 As libraries became digitized, the paper card catalog transformed from index cards to meta-data in a database.<\/p>\n<p>The purpose of meta-data is to provide an efficient way of finding relevant information and resources.\u00a0 It does this by logically organizing the data by means of identification.\u00a0 Let&#8217;s see how this applies to multimedia meta-data, specifically digital image meta-data.<\/p>\n<p>If you were to be handed a photograph by itself, you would be limited in what you knew about the picture.\u00a0 In contrast, if the photograph had a caption handwritten on the back of it 
\u201cRockaway Beach, 1972\u201d, now you would know when and where.<\/p>\n<p>Digital image meta-data operates under the same premise.\u00a0 However, it isn&#8217;t limited to the \u201chuman based\u201d details that might be handwritten on the back of the photograph.\u00a0 It&#8217;s not uncommon for the details to include how large the picture is, the color depth, the image resolution, when the image was created, or the shutter speed.\u00a0 It would be tedious and just plain silly to do this manually.\u00a0 With digitized images, the meta-data is created automatically.\u00a0 The level of information depends only on the stage at which the images are created or modified.<\/p>\n<p><a href=\"http:\/\/192.168.3.4\/wp-content\/uploads\/2016\/09\/intel-realsense-depth-enhanced-photography-fig3.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-2529\" src=\"http:\/\/192.168.3.4\/wp-content\/uploads\/2016\/09\/intel-realsense-depth-enhanced-photography-fig3.png\" alt=\"intel-realsense-depth-enhanced-photography-fig3\" width=\"300\" height=\"488\" \/><\/a><\/p>\n<p>Let&#8217;s take our Rockaway Beach photograph again, only this time it was taken with a digital camera.\u00a0 When the camera created the image, it also created meta-data about the image.\u00a0 Some of the elements of that meta-data are the Geo-tag and time stamp.\u00a0 Instead of handwriting it, the information is now embedded as a layer in the image file itself.<\/p>\n<p>Opening the image file on a computer, I can see that it was taken at Rockaway Beach this year.\u00a0 However, the detail is even greater than that.\u00a0 The Geo-tag tells me where on Rockaway Beach, typically within 25 meters.\u00a0 Meanwhile, the time stamp tells me down to the millisecond when the picture was taken.\u00a0 This is an almost ridiculous level of detail, but it&#8217;s there anyway because the digitization process does it automatically and effortlessly.<\/p>\n<p>With the meta-data, I can also see what device took the photograph and several 
settings that photographers would understand.\u00a0 These lend a hand to post-processing and image enhancements later.\u00a0 The meta-data can be used to automatically post-process the image for the best visual appearance.\u00a0 That is significant.<\/p>\n<p>The nitty-gritty of meta-data is that it doesn&#8217;t stop with just those elements.\u00a0 There are consortiums that are constantly defining, refining, and ratifying the standards that meta-data is built on.\u00a0 One such element is the depth-map.<\/p>\n<p><a href=\"http:\/\/192.168.3.4\/wp-content\/uploads\/2016\/09\/intel-realsense-depth-enhanced-photography-fig1.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-2525\" src=\"http:\/\/192.168.3.4\/wp-content\/uploads\/2016\/09\/intel-realsense-depth-enhanced-photography-fig1.jpg\" alt=\"intel-realsense-depth-enhanced-photography-fig1\" width=\"400\" height=\"225\" \/><\/a><a href=\"http:\/\/192.168.3.4\/wp-content\/uploads\/2016\/09\/intel-realsense-depth-enhanced-photography-fig2.png.jpeg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-2526\" src=\"http:\/\/192.168.3.4\/wp-content\/uploads\/2016\/09\/intel-realsense-depth-enhanced-photography-fig2.png.jpeg\" alt=\"intel-realsense-depth-enhanced-photography-fig2.png\" width=\"400\" height=\"224\" \/><\/a><\/p>\n<p>A depth-map is an image layer that contains a gray-scale image with values that represent distance.\u00a0 The distance reference point is the observer, typically the camera that took the photograph.\u00a0 The varying shades of gray indicate how far or close a pixel is from the camera.\u00a0 The layer is a component of a standard known as the eXtensible Device Metadata, or XDM, specification.\u00a0 It is the result of a collaborative effort by Intel and Google starting in 2014.\u00a0 As of version 1.01, XDM supports a variety of use cases including Depth, VR, and 360 photography.<\/p>\n<p>It might not be apparent, but the XDM spec 
isn&#8217;t just a meta-data value for the depth-map, but a whole host of meta-data values describing a wide variety of image properties.\u00a0 With these values, the flat two-dimensional image can be rendered with a greater degree of detail.\u00a0 The higher detail allows for more processing options.\u00a0 To put it simply, the image can now be analyzed, manipulated, and rendered in ways a traditional image cannot.\u00a0 This is the real magic behind depth-map meta-data.<\/p>\n<p>For more details about the XDM specification, reference it from the XDM.org website, <a href=\"http:\/\/www.xdm.org\">http:\/\/www.xdm.org<\/a>.<\/p>\n<p>Traditional images are evolving into dynamic datasets that go far beyond the initial intent of cataloging and organizing.\u00a0 Embedded information in the image not only lets us know details about the image source, but gives us the ability to process the image.\u00a0 The phrase \u201cmore than meets the eye\u201d has never been truer in the field of digitized photography.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>It is time to step back from system administrative tasks and move back into the subject of digitized multimedia data processing.\u00a0 The subject is a vast, ever-evolving beast.\u00a0 Multimedia basically consists of some combination of visual or auditory information.\u00a0 With digitized multimedia, additional data can be layered in, which provides any number of uses. 
In this post, I&#8217;ll be focusing on image file meta-data.\u00a0 There are several levels and flavors of meta-data which I won&#8217;t cover in detail here.\u00a0 However,&#8230;<\/p>\n<p class=\"read-more\"><a class=\"btn btn-default\" href=\"https:\/\/www.cloudacm.com\/?p=2523\"> Read More<span class=\"screen-reader-text\">  Read More<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9,10,3],"tags":[],"class_list":["post-2523","post","type-post","status-publish","format-standard","hentry","category-computer-vision","category-data-mining","category-rd"],"_links":{"self":[{"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/posts\/2523","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2523"}],"version-history":[{"count":2,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/posts\/2523\/revisions"}],"predecessor-version":[{"id":2525,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=\/wp\/v2\/posts\/2523\/revisions\/2525"}],"wp:attachment":[{"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2523"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2523"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.cloudacm.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2523"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}