Piping NumPy arrays to other processes in Python

To pipe data from one process to another as a stream in Python, we need to pickle the object and pass it to the pipe stream.
In this example I've used NumPy arrays, but this could be applied to any object that can be pickled in Python.
This took far too long to get working, and I could find little information online on how to put it all together, so here it is.
This code is Python 3 only, and I've only run it on a Mac.

I've used a binary stream rather than text purely for efficiency; NumPy arrays can get huge! This means readline() is not
going to work. Instead, I send a single control byte: 1 for data and 0 for stop. This could be extended to include other control operations.
I then send the length of the data as an 8-byte int, followed by the data itself.
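As a concrete example of the framing: a 16-byte pickle goes down the pipe as 25 bytes in total, the control byte 0x01, then the length 16 as eight big-endian bytes (0x00 seven times followed by 0x10), then the 16 bytes of pickle data.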

simpleSend.py

import numpy as np
import pickle
import sys
import io
import time

#define some control bytes
control_data=bytes([1])
control_stop=bytes([0])

def send_data(arr):
    dataStr=pickle.dumps(arr)  #pickle the data array into a byte string
    dlen=len(dataStr).to_bytes(8, byteorder='big')  #encode the length as an 8-byte int
    print(control_data.decode('latin-1'),end='',flush=True)  #write the control byte
    print(dlen.decode('latin-1'), end='', flush=True)   #write the data length
    print(dataStr.decode('latin-1'), end='', flush=True)  #write the data; end='' will remove that extra \r\n

def send_stop():
    print(control_stop.decode('latin-1'), end='', flush=True) 

#set the stdout such that it prints in latin-1,   sys.stdout.detach() is a binary stream
sys.stdout = io.TextIOWrapper(sys.stdout.detach(), encoding='latin-1')

for p in range(10):
    arr=np.ones((5000,500))*p  #generate some data
    send_data(arr)
    #the sleep is purely for testing (does the reader fall over after a long delay?) and can be removed
    time.sleep(.1)
send_stop()        

simpleReceiver.py

import numpy as np
import sys
import pickle

#define some control bytes
control_data=bytes([1])
control_stop=bytes([0])

while True:
    data=sys.stdin.buffer.read(1)   #read the control byte
    if data==control_data:
        data=sys.stdin.buffer.read(8)  #read the data length
        dlen=int.from_bytes(data, byteorder='big')
        print('data length %d'%dlen)
        data=sys.stdin.buffer.read(dlen) #read the data        
        npd=pickle.loads(data)  #unpickle
        print(npd.shape)
        print(npd.max())
    elif data==control_stop:
        print('stopped')
        break
    else:
        print('Oh no')  #unexpected control byte

To run this:
python simpleSend.py | python simpleReceiver.py

If we want to use Python's subprocess module to start simpleReceiver.py, we basically need to write to the subprocess's stdin instead of using print.

import numpy as np
import pickle
import sys
import subprocess as sp

#define some control bytes
control_data=bytes([1])
control_stop=bytes([0])

def send_data(arr,mp):
    dataStr=pickle.dumps(arr)  #pickle the data array into a byte string
    dlen=len(dataStr).to_bytes(8, byteorder='big')  #encode the length as an 8-byte int
    mp.stdin.write(control_data)  #write the control byte
    mp.stdin.write(dlen)          #write the data length
    mp.stdin.write(dataStr)       #write the data itself
    mp.stdin.flush() #not sure this is needed
     
def send_stop(mp):
    mp.stdin.write(control_stop)
    mp.stdin.flush()
     
try:
    mp = sp.Popen("python3 simpleReceiver.py", shell=True, stdin=sp.PIPE)
except OSError as err:  #Popen raises OSError if the process cannot be started
    print('ERROR:', err)
    sys.exit(-1)

for p in range(10):
    arr=np.ones((5000,5000))*p  #generate some data
    send_data(arr,mp)
send_stop(mp)        

With such a large array (5000×5000) this takes some time. Running it through the Python profiler indicates about 75% of the time is taken by pickle.dumps, and most of the remaining 25% is taken by the write operation. NumPy's own methods give a speed increase: replacing dataStr=pickle.dumps(arr) with dataStr=arr.tobytes() and npd=pickle.loads(data) with npd=np.frombuffer(data) more than halves the time taken, but loses the shape and dtype information. This would have to be sent along with the data.
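As a sketch of how that might look (the function names and the header/payload framing here are my own, not from the code above), the shape and dtype can be pickled as a small header, since pickling something that tiny costs almost nothing, and sent in front of the raw buffer:

import numpy as np
import pickle

control_data=bytes([1])

def send_array_raw(arr, mp):
    header = pickle.dumps((arr.shape, arr.dtype.str))  #tiny metadata header
    payload = arr.tobytes()                            #raw buffer, no pickle
    mp.stdin.write(control_data)                       #control byte as before
    mp.stdin.write(len(header).to_bytes(8, byteorder='big'))
    mp.stdin.write(header)
    mp.stdin.write(len(payload).to_bytes(8, byteorder='big'))
    mp.stdin.write(payload)
    mp.stdin.flush()

def read_array_raw(stream):
    #mirror of the above; call after reading the control byte,
    #with stream=sys.stdin.buffer in the receiver
    hlen = int.from_bytes(stream.read(8), byteorder='big')
    shape, dtype = pickle.loads(stream.read(hlen))
    dlen = int.from_bytes(stream.read(8), byteorder='big')
    return np.frombuffer(stream.read(dlen), dtype=dtype).reshape(shape)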


Finding webcams

List all devices on Windows using FFmpeg:

ffmpeg -list_devices true -f dshow -i dummy

and on Linux

v4l2-ctl --list-devices

To get the device capabilities:

ffmpeg -f dshow -list_options true -i video="Mobius"

where “Mobius” is the name of the camera.

On the Mac use

ffmpeg -f avfoundation -list_devices true -i ""

How to write lossless video in Python

OpenCV does a reasonable job of reading videos from files or webcams. It's simple and mostly works. When it comes to writing videos,
however, it leaves a lot to be desired. There is little control over the codecs and it is almost impossible to know which codecs are installed. It also wants to know things like the frame size at initialisation. This isn't always a problem, but if you don't know it yet it means you have to set up the video writer inside your main processing loop.

To make something as cross-platform compatible as possible it would be nice to use FFmpeg. There are a few Python wrappers around, but as far as I can tell they are mainly used for transcoding type applications. One solution is to run FFmpeg as a subprocess and set its input to accept a pipe; every video frame is then passed through the pipe. You could write this yourself, in fact it's only a few lines of code. However, the scikit-video package will do this for us, with some nice boilerplate to make life easier.
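For reference, here is a minimal sketch of the do-it-yourself version (the frame size, rate and filename are assumptions here, and OpenCV-style 8-bit BGR frames are assumed):

import subprocess as sp

width, height, fps = 640, 480, 30   #assumed frame properties
cmd = ['ffmpeg', '-y',
       '-f', 'rawvideo', '-pix_fmt', 'bgr24',   #raw BGR frames from the pipe
       '-s', '%dx%d' % (width, height), '-r', str(fps),
       '-i', '-',                               #read the video data from stdin
       '-vcodec', 'libx264', '-crf', '0',       #lossless h.264, as in the example below
       'out.mp4']
proc = sp.Popen(cmd, stdin=sp.PIPE)
#for each frame (a height x width x 3 uint8 array):
#    proc.stdin.write(frame.tobytes())
#and when finished:
#proc.stdin.close()
#proc.wait()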

The steps are:

  1. install FFmpeg — if you're running on Linux use your system's package manager if it's not already installed. If you're unlucky enough to be using Windows you need to download the zip file from here, and add the bin directory to your system's path.
  2. install scikit-video. I tried installing scikit-video via pip on my Anaconda distro but the version was too old. Instead, I cloned the GitHub version and installed that. Instructions are provided on GitHub.

Below is a simple example that grabs frames from your webcam and records lossless video.

#test recording of video
import cv2
import skvideo.io


capture=cv2.VideoCapture(0) #open the default webcam
outputfile = "test.mp4"   #our output filename
writer = skvideo.io.FFmpegWriter(outputfile, outputdict={
  '-vcodec': 'libx264',  #use the h.264 codec
  '-crf': '0',           #set the constant rate factor to 0, which is lossless
  '-preset':'veryslow'   #the slower the preset the better the compression, in principle;
                         #for other options see https://trac.ffmpeg.org/wiki/Encode/H.264
})
while True:
    ret,frame=capture.read()
    if not ret:
        print("Bad frame")
        break
    cv2.imshow('display',frame)
    writer.writeFrame(frame[:,:,::-1])  #write the frame as RGB not BGR
    ret=cv2.waitKey(10)
    if ret==27: #esc
        break

writer.close() #close the writer
capture.release()
cv2.destroyAllWindows()

Reskinning Max2play

Max2play is a great way to have an out-of-the-box music server running on a Raspberry Pi with a touch screen. I've installed Max2play on a Raspberry Pi 3 and a 7 inch touch screen. The only problem with it is that I would like to run other software on the Pi as well (such as controlling my Philips Hue lighting). Out of the box, the Pi boots up and runs the full screen Jivelite app which controls the music. There are two options I could think of: 1) write a plugin for Jivelite to control the Hue lights, or 2) control the lighting with a separate app and have the ability to launch Jivelite manually. I went for option 2 since Jivelite is written in Lua, which I don't know.

The first problem is how to launch Jivelite manually. For this I added a Jivelite icon to the desktop.  I’m assuming Max2play is installed with Jivelite options and everything is working.

In the Jivelite plugin settings, disable autostart. In the Settings/Reboot tab, enable Autostart desktop (the desktop is normally started with the Jivelite plugin so we need to enable it here).

In the Pi's file manager go to Edit/Preferences and enable 'Open files with single click'; it's not easy double clicking with the touch screen. You might also want to increase the icon sizes whilst you're there. These settings affect the desktop as well.

SSH into the Raspberry Pi, either from the terminal or via the web plugin you can install. Go to the desktop folder and create a desktop launch shortcut. We will start by making one to launch Jivelite, so create a text file with nano jivelite.desktop and add:

[Desktop Entry]
Type=Application
Icon=/home/pi/music.png  <edit this to point to your own icon>
Name=Jivelite
Comment=Start the Jivelite music player
Exec=/opt/jivelite/jivelite/bin/jivelite

Save the file and that's it. You should now be able to launch Jivelite from the desktop icon. The quit button does not quit the application (it seems to stop the music), so you have to quit from the menu options in Jivelite. (There appears to be a patch file that changes this behaviour but I'll investigate that at a later date.)

To autostart at login, make a symbolic link to the .desktop file and place it in ~/.config/autostart.
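Assuming the shortcut was created in /home/pi/Desktop, something like the following should work (you may need to create the autostart directory first):

mkdir -p ~/.config/autostart
ln -s /home/pi/Desktop/jivelite.desktop ~/.config/autostart/jivelite.desktop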

You may want to disable the screensaver, which blanks the screen after 10 minutes as standard. Remove the @xscreensaver -no-splash line from /etc/xdg/lxsession/LXDE/autostart and from ~/.config/lxsession/LXDE/autostart. Stopping the screensaver server has no effect, and I'm not sure what causes the screen to blank when running the GUI; it might be power saving in X. The easiest way to disable the screensaver is to install a screensaver client and configure that not to run:

sudo apt-get install xscreensaver

After that's completed there will be a screensaver option in the LXDE GUI menu. Run that and disable the screensaver from there.

Connecting an LED to a single board computer

Just a few notes on powering LEDs from single board computers. I've just started playing with an Adafruit Feather Huzzah, which has just turned up with a starter kit containing LEDs, switches, resistors and, interestingly, no wires. But I suppose I've got plenty of those around.

The Huzzah is powered by an ESP8266, which is actually a wi-fi chip with a full TCP/IP stack and an integrated micro-controller that can be programmed via PlatformIO or the Arduino IDE. It's only 80MHz but has 4MB of flash, 9 GPIO pins and a single 1V max ADC. The chip is 3.3V and the max current per GPIO is only 12mA.

Many of the GPIO are dual purposed. #0, #2, #15 and #16 are used for boot-mode detection and boot loading. I would avoid these unless really needed.

That leaves #4, #5, #12 and #13.

In the starter kit there is a red LED (1.85-2.5V forward voltage at 20mA current). The longer of the two wires is the anode (+ve).

LEDs are current controlled devices, so if you just wire them to a voltage source (as in a GPIO pin) they will draw as much current as they can and either your LED or your source will go bang. We need to put a current limiting resistor in place, and its value is given by

\( R=\frac{V_s-V_f}{I_{max}} \)

So the worst-case forward voltage is 1.85V and the max current is 12mA, which requires a resistor of at least 121 Ohms. Anything larger than this should be OK; the larger it is, the dimmer the LED will be.
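Plugging the numbers into the formula above (3.3V supply, 1.85V forward voltage, 12mA):

\( R=\frac{3.3-1.85}{0.012}\approx 121\,\Omega \)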
