Sunday, March 11, 2012

Using a photo frame as second monitor [Updated]

Some computing scenarios would benefit from a second monitor, even a small one. A good example is an HTPC, where information such as the current music track, radio station, or TV channel is often shown on small 2-line LCD devices.

Photo frames offer much more space - and much higher resolution than even the biggest LCD devices - to display information. If only one could write to them.

The Samsung SPF-87H Digital Photo Frame is such a device. It can be connected to a computer and switched into a so-called Mini Monitor mode, which allows the computer to write to the frame. Samsung offers a program called 'Frame Manager' for Windows, but nothing for Linux. Some attempts at Linux functionality have been made already, like here, here, and the discussion here.

I am now offering a Python script which can lock the frame into Mini Monitor mode and send pictures to it. The script is very simple and has basically no error checking, but it is heavily commented. It provides only the basic functionality, e.g. pictures must be pre-sized to what the frame can handle (800x480 pixels, width x height). To use the script, copy the content of the post pyframe_basic into a file pyframe_basic and make it executable (chmod a+x pyframe_basic).
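The transfer format the scripts use is simple enough to capture in a few lines. The header layout and the 16384-byte chunking rule are taken from the listings below; the helper name make_packet is mine, not from the script:

```python
import struct

def make_packet(pic):
    """Wrap a raw JPEG byte string into the transfer format of the
    SPF-87H: a 12-byte header followed by the picture data, padded
    with zeros to complete chunks of 16384 bytes."""
    # header: 4 fixed bytes, 4 bytes picture size (little-endian), 4 fixed bytes
    rawdata = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic)) + b"\x48\x00\x00\x00" + pic
    # the frame expects transfers in complete chunks of 16384 bytes
    pad = 16384 - (len(rawdata) % 16384)
    return rawdata + pad * b'\x00'

# a dummy 'picture' of 5000 bytes fits into a single chunk
packet = make_packet(b'\xff' * 5000)
print(len(packet))   # 16384
```

The scripts below write exactly this byte string to USB endpoint 0x02.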

An advanced version - not shown yet - will use the Python Imaging Library (PIL) to process pictures of any size and type to fit the frame's requirements, and could prepare pictures with textual information.
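A minimal sketch of what such processing could look like. It uses Pillow's spelling `from PIL import Image` (the scripts below use the older `import Image`); the status text and picture sizes are just placeholders:

```python
from PIL import Image, ImageDraw  # Pillow

def prepare(img, text=None):
    """Resize any picture to the frame's 800x480 and optionally
    overlay a line of status text (sketch, not the final program)."""
    img = img.convert('RGB').resize((800, 480))
    if text:
        ImageDraw.Draw(img).text((10, 10), text, fill=(255, 255, 255))
    return img

# demo with a generated picture instead of a file on disk
src = Image.new('RGB', (1920, 1080), (0, 0, 128))
out = prepare(src, "Now playing: some radio station")
print(out.size)   # (800, 480)
```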

The photo frame unfortunately does not allow auto-connection. Go through these steps for a manual connection:
  • Connect frame to computer with USB cable
  • Switch on the frame
  • A dialogue pops up on the frame, offering Mass Storage, Mini Monitor, and Photo Frame. Select Mini Monitor and press Select
  • Your welcome picture (see program code) will be shown

UPDATE 1: transfer speed evaluated
UPDATE 2: code for switching from Mass Storage mode to Mini Monitor mode added
UPDATE 3: a program to send screenshots to the photo frame at video speeds, completely from within Python
UPDATE 4: a program which sends screenshots upon receiving a trigger signal
UPDATE 5: a video recorded from the photo frame, showing a video playing on it

A video showing video on the Samsung photoframe

Using the Python programs from this site, I recorded a video of a Samsung SPF-87H Digital Photo Frame with a digital camera. The quality of the clip shown here on the blog is awful in color and resolution, while on the photo frame itself both are excellent. But at least the clip shows that the video plays smoothly through all scenes.


The setup used a virtual frame buffer, so it can also be used on a headless client. In a terminal, give these commands:

Xvfb :99 -screen 0 800x480x16 &
DISPLAY=:99 ./videoframe &
DISPLAY=:99 mplayer -fs /path/to/bbbunny_720p_h264.mov
This creates a virtual frame buffer X server as display :99 with the same screen resolution as the photo frame (800x480; change it to match your frame if needed, both here and in the script), starts the Python videoframe script (see below) in it, and uses mplayer to play a movie in full-screen mode. This was then recorded from the photo frame with a digital camera and uploaded to this post.

The videoframe script records some frame and transfer rates. Here is an excerpt from the final scenes (each line is an average over 50 frames, i.e. 2-3 seconds):
Frames per second: 18.68, Megabytes per second: 0.84
Frames per second: 17.93, Megabytes per second: 0.88
Frames per second: 17.80, Megabytes per second: 0.87
Frames per second: 17.78, Megabytes per second: 0.87
Frames per second: 17.97, Megabytes per second: 0.88
Frames per second: 18.02, Megabytes per second: 0.89
Frames per second: 17.89, Megabytes per second: 0.88
Frames per second: 17.96, Megabytes per second: 0.88
Frames per second: 15.99, Megabytes per second: 0.96
Frames per second: 17.12, Megabytes per second: 0.89
Frames per second: 16.23, Megabytes per second: 0.96
Frames per second: 16.66, Megabytes per second: 0.94
Frames per second: 17.87, Megabytes per second: 0.88
Frames per second: 17.79, Megabytes per second: 0.90
Frames per second: 16.01, Megabytes per second: 0.98
Frames per second: 19.45, Megabytes per second: 0.80
Frames per second: 22.04, Megabytes per second: 0.69
Frames per second: 22.18, Megabytes per second: 0.68
Frames per second: 21.53, Megabytes per second: 0.71
Frames per second: 18.49, Megabytes per second: 0.88
Frames per second: 18.18, Megabytes per second: 0.88
Frames per second: 17.78, Megabytes per second: 0.90
Frames per second: 18.45, Megabytes per second: 0.83
Frames per second: 19.07, Megabytes per second: 0.83
Frames per second: 17.95, Megabytes per second: 0.88
Frames per second: 18.36, Megabytes per second: 0.85
Frames per second: 20.55, Megabytes per second: 0.74
Frames per second: 21.25, Megabytes per second: 0.70
Frames per second: 20.64, Megabytes per second: 0.74
Depending on the complexity of the picture to be JPEG-coded, the observed frame rate varies between 11 and 27 fps. In this setup the CPU is a six-year-old Intel Core2 T7200 2.0GHz, running at ~50% load (its CPU mark is 1150; for reference, today's Intel Core i5-2500 has a CPU mark of 6750). As noticed earlier, the bottleneck appears to be the frame itself. The speed of movements within the movie scenes does NOT affect the transfer rate, as always a single screenshot is taken and processed. However, fine structures (grass, hair, fur, ...) which make for big JPEG files slow the frame rate down.
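That JPEG size depends strongly on picture content is easy to reproduce without the frame. This sketch uses Pillow (not part of the scripts here) and synthetic pictures instead of movie frames, comparing a flat picture with a noisy one at the frame's resolution:

```python
import io
import random
from PIL import Image  # Pillow

def jpeg_size(img):
    """Return the size in bytes of img saved as JPEG (default quality)."""
    buf = io.BytesIO()
    img.save(buf, 'JPEG')
    return len(buf.getvalue())

flat  = Image.new('RGB', (800, 480), (30, 60, 90))
noise = Image.new('RGB', (800, 480))
noise.putdata([(random.randrange(256),) * 3 for _ in range(800 * 480)])

print(jpeg_size(flat), jpeg_size(noise))
# the noisy picture yields a far bigger JPEG, and it is the bigger
# JPEG that caps the frame rate over USB
```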

The python code for videoframe is shown below the line. See code in other posts below for more detailed comments on parts of the script.

Update:
the command:
pmap.save(buffer, 'jpeg')
is the same as:
pmap.save(buffer, 'jpeg', quality = -1)
which sets the quality to its default setting of 75. Quality ranges from 0 (very poor) to 100 (very good). The save command itself is not faster at lower settings, but the resulting picture is smaller, and thus the transfer speed over the USB bus increases, allowing higher frame rates! A quality setting of 60 is usually good enough, certainly for video.

see the reference in the Qt source code:
http://cep.xor.aps.anl.gov/software/qt4-x11-4.2.2-browser/d0/d0e/qjpeghandler_8cpp-source.html#l00897
00959         int quality = sourceQuality >= 0 ? qMin(sourceQuality,100) : 75;
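The size effect of the quality setting can be checked in a few lines. This sketch uses Pillow rather than PyQt4's save(), but the trade-off is the same:

```python
import io
from PIL import Image  # Pillow

# a gradient gives the encoder something to compress
img = Image.new('RGB', (800, 480))
img.putdata([(x % 256, y % 256, (x + y) % 256)
             for y in range(480) for x in range(800)])

sizes = {}
for q in (30, 60, 75, 95):
    buf = io.BytesIO()
    img.save(buf, 'JPEG', quality=q)
    sizes[q] = len(buf.getvalue())
    print(q, sizes[q])
# lower quality -> smaller file -> less data over the USB bus per frame
```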
/Update
_________________________________________________________________________________
#!/usr/bin/python
# -*- coding: UTF-8 -*-

# Program: videoframe
#
# This videoframe program plays videos on the 'Samsung SPF-87H Digital Photo Frame'
# by taking rapid snapshots from a video playing on a screen and transfers them as jpeg
# pictures to the photo frame
#
# It is an application of the sshot2frame program found on the same
# website as this program
# Read that post to understand details not commented here
# Copyright (C) ullix

import sys
import struct
import usb.core

# additional imports are required
from PyQt4 import QtGui, QtCore
import time

device = "SPF87H Mini Monitor"
dev = usb.core.find(idVendor=0x04e8, idProduct=0x2034)

if dev is None:
    print "Could not find", device, " - using screen\n"
    frame = False
else:
    frame = True
    print "Found", device
    dev.ctrl_transfer(0xc0, 4 )  

app  = QtGui.QApplication(sys.argv)

fd = open("shot.log","a", 0)

# Enter into a loop to repeatedly take screenshots and send them to the frame
start  = time.time()
frames = 0
mbyte = 0

while True:
    # take a screenshot and store into a pixmap
    # the screen was set to 800x480, so it already matches the photoframe
    # dimensions, and no further processing is necessary
    pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId())
   
    # create a buffer object and store the pixmap in it as if it were a jpeg file   
    buffer = QtCore.QBuffer()
    buffer.open(QtCore.QIODevice.WriteOnly)
    pmap.save(buffer, 'jpeg')
   
    # now get the just saved "file" data into a string, which we will send to the frame
    pic = buffer.data().__str__()
   
    if not frame:
        print "no photoframe found; exiting"
        sys.exit()
    else:
        rawdata = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic)) + b"\x48\x00\x00\x00" + pic
        pad = 16384 - (len(rawdata) % 16384)
        tdata = rawdata + pad * b'\x00'
        tdata = tdata + b'\x00'
        endpoint = 0x02
        bytes_written = dev.write(endpoint, tdata )
        mbyte += bytes_written

    frames += 1  

    # write out info every 50 frames
    if frames % 50 == 0:
        runtime = time.time() -start
        fd.write("Frames per second: {0:0.2f}, Megabytes per second: {1:0.2f}\n".format( frames / runtime, mbyte/runtime /1000000.))
        start  = time.time()
        frames = 0
        mbyte = 0
     

Triggered Screenshots

When using a photo frame as a display for a (headless) PC, one might want to update the display at regular intervals, e.g. once per minute to update a clock, but also at other events, like pressing a key on a keyboard or remote control.

This can be achieved by making the screenshot program listen to UNIX signals. These signals must not be confused with the signals emitted by GUIs on events like clicking a button or checking a checkbox. Probably the best known of these UNIX signals is SIGINT, which is sent to a program when CTRL-C is pressed and usually ends the program.

For user-defined purposes the signals SIGUSR1 and SIGUSR2 (numerical codes 10 and 12, respectively) are reserved. In the shell these signals can be sent by
kill -SIGUSR1 pid-of-program-to-receive-signal
The tsshot2frame program below listens for this signal and, upon receiving it, takes a screenshot and sends it to the frame.
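The signal mechanics can be tried without frame or screenshots: register a handler for SIGUSR1 and send the signal to the process itself. (Note that in Python 3.5+ an interrupted time.sleep() is automatically restarted, so the sleep-interruption trick used by tsshot2frame is specific to Python 2; the handler registration itself is the same everywhere.)

```python
import os
import signal

received = []

def handler(signum, stack):
    # the real tsshot2frame takes a screenshot at this point (or simply
    # lets the interrupted sleep in its main loop trigger one)
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)

# simulate 'kill -SIGUSR1 <pid>' sent from another process
os.kill(os.getpid(), signal.SIGUSR1)
print(received)   # [10] on Linux
```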

The triggershot program below is just a demo showing how a program can create and send such a signal; in this case it does so when the key 't' is pressed. Obviously other events can be used, like key presses on a remote control, alarm signals from sensors, etc.
______________________________________________________________________________
#!/usr/bin/python
# -*- coding: UTF-8 -*-

# Program: tsshot2frame
# based on sshot2frame, but can additionally be triggered by a SIGUSR1 signal
#
# This triggered-screenshot-to-frame program takes a screenshot from your desktop
# and sends it to the 'Samsung SPF-87H Digital Photo Frame'
#
# The screenshots are taken at regular intervals, but can also be triggered randomly
# by a SIGUSR1 signal, to which this program is listening.
#
# It is an extension of the sshot2frame program found here:
#    http://pyframe.blogspot.com
# Read other posts to understand details not commented here
# Copyright (C) ullix

import sys
import struct
import usb.core
import time
import signal
from PyQt4 import QtGui, QtCore


def takeshot():
    print "tsshot2frame: taking a shot"

    # take a screenshot and store into a pixmap
    #pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId())
    # if you want a screenshot from only a subset of your desktop, you can
    # define it like this (arguments x, y, width, height given positionally)
    pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId(), 0, 600, 1200, 720)

    # next code line is needed only when screenshot does not yet have the proper dimensions for the frame
    # note that distortion will result when aspect ratios of desktop and frame are different!
    # if not needed then inactivate to save cpu cycles
    pmap = pmap.scaled(800,480)

    # create a buffer object and store the pixmap in it as if it were a jpeg file
    buffer = QtCore.QBuffer()
    buffer.open(QtCore.QIODevice.WriteOnly)
    pmap.save(buffer, 'jpeg')
    buffer.close()

    # now get the just saved "file" data into a string, which we will send to the frame
    pic = buffer.data().__str__()

    # wrap pic into write format and write to frame
    rawdata = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic)) + b"\x48\x00\x00\x00" + pic
    pad = 16384 - (len(rawdata) % 16384)
    tdata = rawdata + pad * b'\x00'
    tdata = tdata + b'\x00'
    endpoint = 0x02
    bytes_written = dev.write(endpoint, tdata )


def sigusr1_handler(signum, stack):
    """
    Dummy handler for SIGUSR1 signal.
    """
    pass
    #print "tsshot2frame: sigusr1_handler received signal no:", signum

    # Receiving a signal will interrupt the time.sleep() in the main while loop,
    # which will result in a shot being taken immediately. Therefore a separate
    # takeshot() is not needed here; it would result in two successive shots
    # being taken
    #takeshot()


#----- main starts here ------------------------------

device = "SPF87H Mini Monitor"
dev = usb.core.find(idVendor=0x04e8, idProduct=0x2034)

if dev is None:
    print "tsshot2frame: Could not find device", device, " - exiting\n"
    sys.exit()
else:
    print "tsshot2frame: Found device", device
    dev.ctrl_transfer(0xc0, 4 )

# Setting the signal handler
signal.signal(signal.SIGUSR1, sigusr1_handler)

# Must have a QApplication running to use the other pyqt4 functions
app  = QtGui.QApplication(sys.argv)

# Take screenshots in regular intervals and send them to the frame;
# screenshots triggered by SIGNALS will come in addition
while True:
    print time.time(),
    takeshot()
    time.sleep(60)
    """
    Remember that receiving a SIGNAL will interrupt time.sleep !
    From the python documentation:
    time.sleep(secs)
    Suspend execution for the given number of seconds. The argument may be a
    floating point number to indicate a more precise sleep time. The actual
    suspension time may be less than that requested because any caught signal
    will terminate the sleep() following execution of that signal’s catching
    routine. Also, the suspension time may be longer than requested by an
    arbitrary amount because of the scheduling of other activity in the system.
    """
Following is the triggershot program:
_______________________________________________________________________________
#!/usr/bin/python
# -*- coding: UTF-8 -*-

# Program: triggershot
# sends the SIGUSR1 signal (numerical value 10) to the
# script tsshot2frame when keypress detected
# Copyright (C) ullix

import time
import signal
import os
import sys
import subprocess
import termios
import fcntl


def triggersignal():
    """
    find the pid of our triggered-screen-shot program and send a
    SIGUSR1 to it
    """
    script = "tsshot2frame"

    print time.time(),"trigger: sending SIGUSR1 to ", script

    # execute shell command 'ps -A | grep tsshot2frame' and obtain its output
    p1 = subprocess.Popen(["ps", "-A"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["grep", script], stdin=p1.stdout, stdout=subprocess.PIPE)
    output = p2.communicate()[0]
    #print "pipe outsub=",output

    if script in output and '<defunct>' not in output:
        pid = int(output.split()[0])  # first token of the ps line is the pid
        #print script + " is running, pid: ", pid

    else:
        if '<defunct>' in output:
            #print script + " running but defunct, clear up first"
            os.system("killall " + script ) # clear up if defunct
        else:
            #print script + " not running"
            pass

        pid = subprocess.Popen("./" + script  ).pid
        #pid = subprocess.Popen(script).pid # if script is in path
        time.sleep(2) # give it time to start
        #print script + " restarted, pid: ", pid

    os.kill(pid, signal.SIGUSR1)


def getch():
    # code according to:
    # http://docs.python.org/faq/library#how-do-i-get-a-single-keypress-at-a-time
    fd = sys.stdin.fileno()
    oldterm = termios.tcgetattr(fd)
    newattr = termios.tcgetattr(fd)
    newattr[3] = newattr[3] & ~termios.ICANON & ~termios.ECHO
    termios.tcsetattr(fd, termios.TCSANOW, newattr)

    oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

    c = ""
    try:
        while True:
            # read from stdin as long as there are characters to be read
            # if all read then return
            try:
                c += sys.stdin.read(1)
            except IOError as (errno, msg):
                #print "IOError", errno, msg,
                break
    finally:
        # restore old settings
        termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
        fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)

    return c

#----- main starts here ------------------------------

triggersignal()
while True:
    time.sleep(0.3)
    c = getch()
    if "t" in c :
        print "Read character t, triggering screenshot"
        triggersignal()

Monday, March 5, 2012

sshot2frame - send screenshots to photoframe at video speed

#!/usr/bin/python
# -*- coding: UTF-8 -*-

# Program: sshot2frame
#
# This screenshot-to-frame program takes a screenshot from your desktop
# and sends it to the  'Samsung SPF-87H Digital Photo Frame'
#
# This can be done at frame rates of 20+ fps so that it is even possible
# to watch video on the frame, when video is playing on the desktop!
# (tested with mythtv)
#
# It is an extension of the pyframe_basic program found here:
#    http://pyframe.blogspot.com/2011/12/pyframebasic-program_15.html
# Read that post to understand details not commented here
# Copyright (C) ullix

import sys
import struct
import usb.core

# additional imports are required
from PyQt4 import QtGui, QtCore
import Image
import StringIO
import time

device = "SPF87H Mini Monitor"
dev = usb.core.find(idVendor=0x04e8, idProduct=0x2034)

if dev is None:
    print "Could not find", device, " - using screen\n"
    frame = False
else:
    frame = True
    print "Found", device
    dev.ctrl_transfer(0xc0, 4 )  


# Must have a QApplication running to use the other pyqt4 functions
app  = QtGui.QApplication(sys.argv)

# Enter into a loop to repeatedly take screenshots and send them to the frame
start  = time.time()
frames = 0
while True:
    # take a screenshot and store into a pixmap
    pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId())
   
    # if you want a screenshot from only a subset of your desktop, you can
    # define it like this (arguments x, y, width, height given positionally)
    #pmap = QtGui.QPixmap.grabWindow(QtGui.QApplication.desktop().winId(), 0, 600, 800, 480)

    # next line is needed only when screenshot does not yet have the proper dimensions for the frame
    # note that distortion will result when aspect ratios of desktop and frame are different!
    # if not needed then inactivate to save cpu cycles
    pmap = pmap.scaled(800,480)
   
    # if desired, save the pixmap into a jpg file on disk. Not required here
    #pmap.save(filename , 'jpeg')

    # create a buffer object and store the pixmap in it as if it were a jpeg file   
    buffer = QtCore.QBuffer()
    buffer.open(QtCore.QIODevice.WriteOnly)
    pmap.save(buffer, 'jpeg')
   
    # now get the just saved "file" data into a string, which we will send to the frame
    pic = buffer.data().__str__()
   
    ######################
    # the code within ########## is needed only to create a PIL Image object,
    # shown below by im.show(), e.g. for debugging purposes when no frame is present
   
    #picfile = StringIO.StringIO(pic)            # stringIO creates a file in memory
    #im1=Image.open(picfile)   
    #im = im1.resize((800,480), Image.ANTIALIAS) # resizing not needed when screenshot already has the right size
                                                 # note that distortion will result when aspect ratios of desktop
                                                 # and frame are different!
    #picfile.close()
    ######################
   
    if not frame:
        # remember to activate above ########### lines if you use im.show() command
        im.show()       
    else:
        rawdata = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic)) + b"\x48\x00\x00\x00" + pic
        pad = 16384 - (len(rawdata) % 16384)
        tdata = rawdata + pad * b'\x00'
        tdata = tdata + b'\x00'
        endpoint = 0x02
        bytes_written = dev.write(endpoint, tdata )

    frames += 1  

    # exit the while loop after some cycles, or remove code to get indefinite loop
    if frames > 100:
        break

    # set time delay between screenshots in seconds. The frame can handle some 20+fps,
    # so 0.1sec (i.e. max of 10fps) is ok for the frame but possibly too fast for a slow cpu
    #time.sleep(0.1)
   
runtime = time.time() -start
print "Frames per second: {0:0.2f}".format( frames / runtime)      

Thursday, January 12, 2012

Code to switch the frame from Mass Storage mode to Mini Monitor mode

A picture can only be written to the photo frame when it is in Mini Monitor mode (and initialized). However, when it is found in Mass Storage mode, it can be switched to Mini Monitor mode by a script. The code required is:
dev.ctrl_transfer(0x00|0x80,  0x06, 0xfe, 0xfe, 0xfe )
Settling on the USB bus takes <0.5sec, but give it some extra time.

A stripped-down version of a program which takes care of switching and initialization, and which can be fed with pictures of any size and (almost any) type, follows. Note the use of the Image module for image manipulation, and of StringIO to avoid writing temp files to disk and reading them back:

#!/usr/bin/python
# -*- coding: UTF-8 -*-

import os
import sys
import time
import usb.core
import usb.util
import StringIO
import Image
import struct

def write_jpg2frame(dev, pic):
    """Attach header to picture, pad with zeros if necessary, and send to frame"""
    # create header and stack before picture
    # middle 4 bytes have size of picture
    rawdata = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic)) + b"\x48\x00\x00\x00" + pic
    # total transfer must be in complete chunks of 16384 = 2^14 bytes; pad with zeros,
    # plus one extra zero byte (some pictures fail to transfer without it)
    pad = 16384 - (len(rawdata) % 16384) + 1
    tdata = rawdata + pad * b'\x00'
    # Syntax: write(self, endpoint, data, interface = None, timeout = None)
    endpoint = 0x02
    dev.write(endpoint, tdata )
   

def get_known_devices():
    """Return a dict of photo frames"""
    # listed as: Name, idVendor, idProduct, [width , height - in pixel if applicable]
    #
    # Samsung SPF-87H in either mini monitor mode or mass storage mode
    SPF87H_MiniMon   = {'name':"SPF87H Mini Monitor", 'idVendor':0x04e8, 'idProduct':0x2034, 'width':800, 'height':480 }
    SPF87H_MassSto   = {'name':"SPF87H Mass Storage", 'idVendor':0x04e8, 'idProduct':0x2033}
   
    # Samsung SPF-107H (data from web reports - not tested)
    SPF107H_MiniMon  = {'name':"SPF107H Mini Monitor", 'idVendor':0x04e8, 'idProduct':0x2036, 'width':1024, 'height':600 }
    SPF107H_MassSto  = {'name':"SPF107H Mass Storage", 'idVendor':0x04e8, 'idProduct':0x2035}

    # Samsung SPF-83H (data from web reports - not tested)
    SPF83H_MiniMon   = {'name':"SPF83H Mini Monitor", 'idVendor':0x04e8, 'idProduct':0x200d, 'width':800, 'height':600 }
    SPF83H_MassSto   = {'name':"SPF83H Mass Storage", 'idVendor':0x04e8, 'idProduct':0x200c}

    return    ( SPF87H_MiniMon, SPF87H_MassSto, SPF107H_MiniMon, SPF107H_MassSto, SPF83H_MiniMon, SPF83H_MassSto )
 

def find_device(device):
    """Try to find device on USB bus."""
    return usb.core.find(idVendor=device['idVendor'], idProduct=device['idProduct'])   


def init_device(device0, device1):
    """First try Mini Monitor mode, then Mass storage mode"""
    dev = find_device(device0)
 
    if dev is not None:
        ## found it, trying to init it
        frame_init(dev)
    else:
        # not found device in Mini Monitor mode, trying to find it in Mass Storage mode
        dev = find_device(device1)
        if dev is not None:
            #found it in Mass Storage, trying to switch to Mini Monitor
            frame_switch(dev)
            ts = time.time()
            while True:
                # may need to burn some time
                dev = find_device(device0)
                if dev is not None:
                    #switching successful
                    break
                elif time.time() - ts > 2:
                    print "switching failed. Ending program"
                    sys.exit()
            frame_init(dev)
        else:
            print "Could not find frame in either mode"
            sys.exit()
    return dev

  
def frame_init(dev):
    """Init device so it stays in Mini Monitor mode"""
    # this is the minimum required to keep the frame in Mini Monitor mode!!!
    dev.ctrl_transfer(0xc0, 4 )   
 

def frame_switch(dev):
    """Switch device from Mass Storage to Mini Monitor""" 
    dev.ctrl_transfer(0x00|0x80,  0x06, 0xfe, 0xfe, 0xfe )
    # settling of the bus and frame takes about 0.42 sec
    # give it some extra time, but then still make sure it has settled
    time.sleep(1)

   
def main():
    global dev, known_devices_list
   
    known_devices_list = get_known_devices()

    # define which frame to use, here use Samsung SPF-87H
    device0 = known_devices_list[0] # Mini Monitor mode
    device1 = known_devices_list[1] # Mass Storage mode

    dev = init_device(device0, device1)   
    print "Frame is in Mini Monitor mode and initialized. Sending pictures now"

    image = Image.open("mypicture.jpg")
    #manipulations to consider:
    #  convert
    #  thumbnail
    #  rotate
    #  crop
    image = image.resize((800,480))
    output = StringIO.StringIO()
    image.save(output, "JPEG", quality=94)
    pic  = output.getvalue()
    output.close()
    write_jpg2frame(dev, pic)       
      

if __name__ == "__main__":
    main()

Wednesday, January 11, 2012

pyframe transfer speed sufficient even for video

I was wondering about the transfer speed of pictures to the frame, given that Python is an interpreted language. It turned out to be much faster than expected:

Two very different pairs of pictures were loaded by the script, prepared, stored in memory, and transferred to the frame in alternation. I used one pair of pictures which were simple and small (<<16384 bytes), and another with rather complex and hence larger pictures (ca. 100 kB after resizing to 800x480). All are attached to this post. The transfer of the pictures resulted in a CPU load of only about 1-2%. Here are the measured data:

Picture pair          Picture size (B)    Pictures/sec    Total transfer MB/sec
red.jpg/blue.jpg      6631/2536           28              0.46
i244.jpg/i247.jpg     98792/123768        16              1.93

Since a minimum chunk of 16384 bytes per picture needs to be transferred irrespective of the picture size, small pictures do not benefit much from their small size with respect to transfer speed. Generally, 20+ pictures/sec should be achievable.
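The chunking arithmetic behind this can be checked with the file sizes from the table above (the helper name transfer_size is mine; the optional extra zero byte discussed in the script comments is ignored here):

```python
def transfer_size(picsize):
    """Bytes actually sent for one picture: a 12-byte header, then
    padded up to the next complete chunk of 16384 bytes."""
    raw = 12 + picsize
    return raw + (16384 - raw % 16384)

for picsize in (6631, 2536, 98792, 123768):
    t = transfer_size(picsize)
    print(picsize, t, "overhead %.0f%%" % (100.0 * t / picsize - 100))
# both small pictures expand to a full 16384-byte chunk, while the
# large ones carry only a few percent of padding
```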

Since the USB bus can transfer at least 10x as much, and the CPU even more, I conclude that the transfer speed is limited by the frame.

Next I took a video clip and converted each frame into a JPEG picture using ffmpeg, which I then tried to transfer to the frame as a fast sequence. Surprisingly, I could not transfer even a single picture out of several hundred, although each picture could be viewed correctly with all photo viewer programs on my computer. I tried a variety of permutations of the ffmpeg parameters, but without success.

However, reading and rewriting each picture with the Python Image module and this code resulted in pictures fit for transfer to the frame:
import Image
filename = "inpicture.jpg" 
image = Image.open(filename)
image.save( "outpicture", "JPEG", quality=95)
Transferring those pictures was possible at a flicker-free frame rate of some 26 pictures/sec (1.7 MB/sec) - the "picture" frame turned into a "video" frame! Reading the pictures from a fast SSD did not improve the speed, which is consistent with the frame itself being the bottleneck.


The test script is this:
#!/usr/bin/python

import sys
import os
import struct
import usb.core
import time

device = "SPF87H Mini Monitor"

dev = usb.core.find(idVendor=0x04e8, idProduct=0x2034)

if dev is None:
    print "Could not find", device, " - Exiting\n"
    sys.exit()

print "Found", device

dev.ctrl_transfer(0xc0, 4 )   

if len(sys.argv) < 3:
    print "I need 2 pictures  - Exiting."
    sys.exit()

filename1 = sys.argv[1]
filename2 = sys.argv[2]
filesize1 = os.path.getsize(filename1)
filesize2 = os.path.getsize(filename2)
print "Picture 1 to show:", filename1, "filesize:", filesize1
print "Picture 2 to show:", filename2, "filesize:", filesize2

# Open the picture file and read into a string
infile1 = open(filename1, "rb")
pic1 = infile1.read()
infile1.close()

infile2 = open(filename2, "rb")
pic2 = infile2.read()
infile2.close()

# The photo frame expects a header of 12 bytes, followed by the picture data.
# The first 4 and the last 4 bytes are always the same.
# The middle 4 bytes are the picture size (excluding the header) with the least significant byte first
rawdata1 = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic1)) + b"\x48\x00\x00\x00" + pic1
rawdata2 = b"\xa5\x5a\x18\x04" + struct.pack('<I', len(pic2)) + b"\x48\x00\x00\x00" + pic2

# The photo frame expects transfers in complete chunks of 16384 bytes (=2^14 bytes).
# If last chunk of rawdata is not complete, then make it complete by padding with zeros.
pad1 = 16384 - (len(rawdata1) % 16384)
tdata1 = rawdata1 + pad1 * b'\x00'

pad2 = 16384 - (len(rawdata2) % 16384)
tdata2 = rawdata2 + pad2 * b'\x00'

# For unknown reasons, some pictures will only transfer successfully, when at least one
# additional zero byte is added. Possibly a firmware bug of the frame?
#tdata1 = tdata1 + b'\x00'
#tdata2 = tdata2 + b'\x00'

# Write the data. Must write to USB endpoint 2
endpoint = 0x02

bytes_written1 = 0
bytes_written2 = 0

ts = time.time()
nr = 100
for i in range(nr):
    bytes_written1 += dev.write(endpoint, tdata1 )
    bytes_written2 += dev.write(endpoint, tdata2 )  
   
te = time.time()

total = bytes_written1 + bytes_written2
td = te - ts
print "time lapsed writing:", td, "sec"

print "total no of pictures transferred:", nr * 2, ", rate: ", "%02.1f pictures/sec"% (nr * 2 / td)
print "total no of bytes transferred:", total, ", rate:",  "%03d kB/sec"%(total/td/1000.)

sumfs = nr * (filesize1 + filesize2)
print "transfer overhead: %3d%% " % ((100.*total/sumfs) - 100.)
The test pictures follow, each one 800x480 pixels.