Dynamic Images

Introduction – There is more to the picture for the eye

In the last post we extracted data from images and processed it.  In this post we will cover manipulating images based on external data.  This is useful because it lets us combine several data sets into a single view.  I'll go through how to take an image and alter it based on variable conditions, and how to overlay text data onto an image.  Then I will cover the steps to upload the image to a web host.

We have a lot to cover, and it will be a springboard to the greater topic of augmented reality.  Let's begin.

Purpose – Let's take a look at the use of this method

A few months back, while researching topics, I came across a site that featured an RPi thermostat.  The item that caught my attention was the thermostat's graphical interface.  The developer had a picture of the home being controlled, with current readings overlaid on that image.  The end result is an intuitive experience for anyone without technical knowledge.  It shows how the RPi can bridge technologies and serve a meaningful purpose.  Although folks starting out in RPi development may not be ready to digest it, I feel it is a prime example of what can be done.  This post follows along the lines of the work that Jeff has done, and it will give you the basic steps to achieve a similar graphical result.

Details – Getting it together so it looks like it should

The one thing that I would like to do in this post is to upload images to this blog, using the RPi, that represent real-time conditions.  Here is an image of the 520 bridge with current traffic conditions.  It also contains the current date and time.  I'm also including the current temperature reading scraped from NOAA.

First things first, I want to express how difficult it was to get a changed image to appear on this blog.  Originally I had intended to upload directly to this website.  For some strange reason the old images would remain, even after uploading a new image.  I suspect a caching proxy is to blame, but I am not privileged enough to know that for sure.  So after much wasted time, I decided to upload the images to another host and point my blog at that as the source image.  I just wanted to point that out before continuing on.

Now let's start with a base image.  In my example above, I used an image found through a Google image search for "new 520 bridge".  It was used in a Seattle PI article about the bridge.  With the base image selected, we can now brand it with some text using ImageMagick (IM).

The RPi has a narrow range of fonts to work with.  I had to find a way to list them, and this site had a great suggestion.  I used this command to find out which fonts I have.


convert -list font > fontlist.txt

I settled on "Nimbus-Mono-Bold-Oblique" for our example.  This command creates the text image we will overlay onto the base image.


convert -size 360x80 xc:none -fill white -pointsize 35 -font Nimbus-Mono-Bold-Oblique -gravity center -draw "text 0,6 'West bound SR-520'" statictext.png

With that, we can create our first text branding of the image.  I used this command to do it.


composite -watermark 20% -geometry -0-70 -gravity center statictext.png 520_Bridge.jpg 520_Bridge_Text.png

Now is a good time to go through the code for each of the steps.  The first convert command created a new image called "statictext.png".  It uses the Nimbus-Mono-Bold-Oblique font at a 35-point size, with a white fill color and a transparent background.  We also set the image dimensions and positioned the text.

Next, we used that generated image in a composite command.  Since we want to see the original image through the text, I chose a watermark of 20%.  Then I positioned the watermark offset from the center, up 70 pixels.

Here is where the dynamic part comes into play.  We'll be adding a date and time stamp to our image so we can see that it is indeed current.  Here is the first command, which creates the date layer image.


convert -size 93x20 xc:none -fill white -pointsize 15 -font Nimbus-Mono-Bold-Oblique -gravity center -draw "text 0,2 '$(date +%Y' '%m' '%d)'" date.png

Now we are ready to create our time layer image using this command.


convert -size 125x22 xc:none -fill white -pointsize 20 -font Nimbus-Mono-Bold-Oblique -gravity center -draw "text 0,2 'Time $(date +%H':'%M)'" time.png
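The quoting in those date calls looks odd at first: the format string is broken into single-quoted pieces so the spaces and the colon are passed literally.  A quick sketch of what actually expands:

```shell
# The pieces concatenate into one format string, so the single-quoted
# ' ' and ':' become literal separators in the output:
#   %Y' '%m' '%d  ->  YYYY MM DD
#   %H':'%M       ->  HH:MM
stamp=$(date +%Y' '%m' '%d)
clock=$(date +%H':'%M)
echo "$stamp"
echo "Time $clock"
```

This is the same expansion the commands above embed inside the -draw argument.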

With both of the time stamp images ready, we can now layer them onto the branded image.


composite -watermark 20% -gravity NorthWest -geometry +5+5 time.png 520_Bridge_Text.png 520_Bridge_Text_Time.png
composite -watermark 20% -gravity NorthWest -geometry +10+30 date.png 520_Bridge_Text_Time.png 520_Bridge_Text_TimeStamp.png

Now comes the real magic.  We'll be scraping data from NOAA.  I found an excellent post that showed me the steps, which saved me an enormous amount of time.  Thank you, Mike!  The process is to the point, and the bonus is the XML format that NOAA uses, which allows easy polling of readings.  Here is the Python script I'll be using.


#!/usr/bin/env python
from lxml import etree
import urllib2

# Fetch and parse NOAA's current observations for KSEA (Sea-Tac)
url = 'http://w1.weather.gov/xml/current_obs/KSEA.xml'
fp = urllib2.urlopen(url)
doc = etree.parse(fp)
fp.close()

# Temperature in Fahrenheit; trim the trailing ".0" to keep whole degrees
temp = doc.xpath("//current_observation/temp_f")[0].text
temp = temp[:-2]
temp_file = open("/home/pi/cacti_scripts/NOAA_KSEA_temp.txt", "w")
temp_file.write(temp)
temp_file.close()

# Relative humidity
rhum = doc.xpath("//current_observation/relative_humidity")[0].text
rhum_file = open("/home/pi/cacti_scripts/NOAA_KSEA_rhum.txt", "w")
rhum_file.write(rhum)
rhum_file.close()

# Wind description string
wstrg = doc.xpath("//current_observation/wind_string")[0].text
wstrg_file = open("/home/pi/cacti_scripts/NOAA_KSEA_wstrg.txt", "w")
wstrg_file.write(wstrg)
wstrg_file.close()

# Visibility in miles
vismi = doc.xpath("//current_observation/visibility_mi")[0].text
vismi_file = open("/home/pi/cacti_scripts/NOAA_KSEA_vismi.txt", "w")
vismi_file.write(vismi)
vismi_file.close()

The real beauty of this is that we get several readings output as text files, which we'll use as variables in our bash script later on.  Now for the last part: the traffic status.
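As a quick sketch of that read-back (using a throwaway directory instead of the post's /home/pi/cacti_scripts path), the $(&lt;file) expansion used later pulls a reading straight into a shell variable:

```shell
# Simulate one scraped reading in a temp dir, then read it back the
# same way the final bash script does with $(<file).
dir=$(mktemp -d)
echo "52" > "$dir/NOAA_KSEA_temp.txt"

noaaTemp=$(<"$dir/NOAA_KSEA_temp.txt")   # file contents, trailing newline stripped
echo "Temp $noaaTemp F"                  # Temp 52 F

rm -rf "$dir"
```

The same pattern works for the humidity, wind, and visibility files.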

In this last section I want to layer text indicating the traffic condition.  We will have five possible states, based on the data we scraped from the WSDOT traffic image in the last post: clear, moderate, heavy, congested, or no data.  In order to bring these values over to our image, we'll need to create conditions.  Luckily, the readings have already been made, so all that needs to be done is to read the text file containing the image color values.  Based on the value, we create our layer image accordingly.  Here is the Python script to do just that.


import os

# Read the hex color value scraped from the WSDOT traffic image
HEXReading_file = open("/home/pi/cacti_scripts/HEXReading.txt", "r")
head = HEXReading_file.read().strip()
HEXReading_file.close()

if head == '000000':
    os.system("convert -size 360x80 xc:none -fill black -pointsize 55 -font Nimbus-Mono-Bold-Oblique -gravity center -draw \"text 0,6 'Congested'\" condition.png")
elif head == 'FF0000':
    os.system("convert -size 360x80 xc:none -fill red -pointsize 55 -font Nimbus-Mono-Bold-Oblique -gravity center -draw \"text 0,6 'Heavy'\" condition.png")
elif head == 'FFFF00':
    os.system("convert -size 360x80 xc:none -fill orange -pointsize 55 -font Nimbus-Mono-Bold-Oblique -gravity center -draw \"text 0,6 'Moderate'\" condition.png")
elif head == '20E040':
    os.system("convert -size 360x80 xc:none -fill green -pointsize 55 -font Nimbus-Mono-Bold-Oblique -gravity center -draw \"text 0,6 'Clear'\" condition.png")
else:
    os.system("convert -size 360x80 xc:none -fill grey -pointsize 55 -font Nimbus-Mono-Bold-Oblique -gravity center -draw \"text 0,6 'No Data'\" condition.png")
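If you would rather keep the condition logic in the bash script instead of Python, the same mapping can be sketched as a shell case statement (the function name and the color:label format here are my own):

```shell
# Map the scraped hex color to a fill color and condition label.
condition_for() {
  case "$1" in
    000000) echo "black:Congested" ;;
    FF0000) echo "red:Heavy" ;;
    FFFF00) echo "orange:Moderate" ;;
    20E040) echo "green:Clear" ;;
    *)      echo "grey:No Data" ;;
  esac
}

# Split the result into the two values a convert call would need
IFS=: read -r fill label <<< "$(condition_for FF0000)"
echo "$fill $label"    # red Heavy
```

Either way, the output feeds the -fill color and the -draw text of the condition layer.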

Now all that is left is putting it all together with a shell script.  This is the best part, because cron can run this single script.


#!/bin/bash

# Scrape the temperature data from NOAA's website
python NOAA_KSEA.py

# Create the image layer that contains the temperature data from NOAA
noaaTemp=$(</home/pi/cacti_scripts/NOAA_KSEA_temp.txt)
convert -size 135x25 xc:none -fill black -pointsize 20 -font Nimbus-Mono-Bold-Oblique -gravity center -draw "text 1,1 'Temp $noaaTemp F'" noaaTemp.png

# Create the static text image layer
convert -size 360x80 xc:none -fill black -pointsize 35 -font Nimbus-Mono-Bold-Oblique -gravity center -draw "text 0,6 'West bound SR-520'" statictext.png

# Create the timestamp image layers
convert -size 93x20 xc:none -fill black -pointsize 15 -font Nimbus-Mono-Bold-Oblique -gravity center -draw "text 0,2 '$(date +%Y' '%m' '%d)'" date.png
convert -size 125x22 xc:none -fill black -pointsize 20 -font Nimbus-Mono-Bold-Oblique -gravity center -draw "text 0,2 'Time $(date +%H':'%M)'" time.png

# Overlay the image layers on the baseline image
composite -watermark 20% -geometry -0-70 -gravity center statictext.png 520_Bridge.jpg 520_Bridge_Text.png
composite -watermark 20% -gravity NorthWest -geometry +5+5 time.png 520_Bridge_Text.png 520_Bridge_Time.png
composite -watermark 20% -gravity NorthWest -geometry +10+30 date.png 520_Bridge_Time.png 520_Bridge_TimeStamp.png
composite -watermark 20% -gravity NorthEast -geometry +5+5 noaaTemp.png 520_Bridge_TimeStamp.png 520_Bridge_Temp.png
composite -gravity center condition.png 520_Bridge_Temp.png 520_Bridge.png

# Upload the final image to the FTP host
wput -u -nc -B /home/pi/noaa/520_Bridge.png 'ftp://<ftpuser>:<ftppass>@<host:port>/<path>/520_Bridge.png'

You can clearly see that the last command in the bash script uploads the file to an FTP host; that was easy.  The script also calls a Python script, and here is the complete code for that.


#!/usr/bin/env python
from lxml import etree
import urllib2
import os

url = 'http://w1.weather.gov/xml/current_obs/KSEA.xml'
fp = urllib2.urlopen(url)
doc = etree.parse(fp)
fp.close()

temp = doc.xpath("//current_observation/temp_f")[0].text
temp = temp[:-2]
temp_file = open("/home/pi/cacti_scripts/NOAA_KSEA_temp.txt", "w")
temp_file.write(temp)
temp_file.close()

rhum = doc.xpath("//current_observation/relative_humidity")[0].text
rhum_file = open("/home/pi/cacti_scripts/NOAA_KSEA_rhum.txt", "w")
rhum_file.write(rhum)
rhum_file.close()

wstrg = doc.xpath("//current_observation/wind_string")[0].text
wstrg_file = open("/home/pi/cacti_scripts/NOAA_KSEA_wstrg.txt", "w")
wstrg_file.write(wstrg)
wstrg_file.close()

vismi = doc.xpath("//current_observation/visibility_mi")[0].text
vismi_file = open("/home/pi/cacti_scripts/NOAA_KSEA_vismi.txt", "w")
vismi_file.write(vismi)
vismi_file.close()

# Read the hex color value scraped from the WSDOT traffic image
HEXReading_file = open("/home/pi/cacti_scripts/HEXReading.txt", "r")
head = HEXReading_file.read().strip()
HEXReading_file.close()

if head == '000000':
    os.system("convert -size 360x80 xc:none -fill black -pointsize 55 -font Nimbus-Mono-Bold-Oblique -gravity center -draw \"text 0,6 'Congested'\" condition.png")
elif head == 'FF0000':
    os.system("convert -size 360x80 xc:none -fill red -pointsize 55 -font Nimbus-Mono-Bold-Oblique -gravity center -draw \"text 0,6 'Heavy'\" condition.png")
elif head == 'FFFF00':
    os.system("convert -size 360x80 xc:none -fill orange -pointsize 55 -font Nimbus-Mono-Bold-Oblique -gravity center -draw \"text 0,6 'Moderate'\" condition.png")
elif head == '20E040':
    os.system("convert -size 360x80 xc:none -fill green -pointsize 55 -font Nimbus-Mono-Bold-Oblique -gravity center -draw \"text 0,6 'Clear'\" condition.png")
else:
    os.system("convert -size 360x80 xc:none -fill grey -pointsize 55 -font Nimbus-Mono-Bold-Oblique -gravity center -draw \"text 0,6 'No Data'\" condition.png")

With the shell script calling the Python script, we can do a lot of work behind the scenes.  The only thing left at this point is creating the cron job, which you can reference in my earlier post.  In Webmin, I entered this to do the trick.


bash /<path>/script.sh

That's it!  Now I have the RPi processing new images at 5-minute intervals and posting them online.
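For reference, outside of Webmin the same schedule would be a plain crontab entry like this (the path placeholder is kept as in the command above):

```shell
# Run the assembly script every 5 minutes
*/5 * * * * bash /<path>/script.sh
```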

Relations – getting more from the RPi with scripts

I had a tremendous learning curve to face with this topic, and I was fortunate to find postings online that cleared up the subject for me.  If you are like me, the topic of Python is a vast open expanse.  Here is a post with several instructional videos to help you make your way through the maze.

Summary – the complete picture

This topic covered the concepts of image manipulation using third-party data.  We used ImageMagick and Python on the RPi to accomplish this, then stepped through how the final image was uploaded to the internet.  The entire process runs from a script that is scheduled to occur every 5 minutes.

The material covered in this section will be fundamental to future projects.  Having the know-how to bring it all together will be instrumental in upcoming posts.
