“A robust architecture search for sward height estimation from UAV-derived Digital Surface Models”: https://www.sciencedirect.com/science/article/pii/S0168169925008956
A New Paper on Low Light Enhancement
C. Liu, Z. Wang, P. Birch and X. Wang, “Efficient Retinex-Based Framework for Low-Light Image Enhancement without Additional Networks,” in IEEE Transactions on Circuits and Systems for Video Technology, doi: 10.1109/TCSVT.2024.3520802.
New Paper on augmentation in human activity recognition
Oishi, Nobuyuki, Phil Birch, Daniel Roggen, and Paula Lago. 2025. “WIMUSim: Simulating Realistic Variabilities in Wearable IMUs for Human Activity Recognition.” Frontiers in Computer Science 7 (January): 1514933.
https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2025.1514933/full
EPSRC studentships at Sussex
Sussex has a number of PhD studentships up for grabs. The application deadline is 14th Feb 2025. See here: https://www.sussex.ac.uk/study/fees-funding/phd-funding/view/1818-EPSRC-Science-and-Engineering-studentships
I’ve got one advertised under “Applications of Human Activity Recognition (HAR) (EP25/15)”. Please get in contact if you’re interested.
#phdfunding #phd
Raytheon anti-drone laser weapon demonstration
New Sussex AI PhD scholarships
www.sussex.ac.uk/study/fees-funding/phd-funding/view/1807-Sussex-AI-PhD-studentships
Contact me if you’d like to discuss.
ETTrack: enhanced temporal motion predictor for multi-object tracking
Han, X., Oishi, N., Tian, Y. et al. ETTrack: enhanced temporal motion predictor for multi-object tracking. Appl Intell 55, 33 (2025). https://doi.org/10.1007/s10489-024-05866-4
Our new paper on tracking multiple people using a transformer to learn complex motion patterns.
Funded PhD
Displaying OpenCV images in PyQt
The code for this tutorial is here
OpenCV provides a useful, but limited, method of building a GUI. A much more complete system can be achieved using PyQt.
The question is, how do we display images? There are quite a few possible routes, but perhaps the easiest is to use QLabel, since it has a setPixmap function. Below is some code that creates two labels, creates a grey pixmap, and displays it in one of the labels. Code: staticLabel1.py
from PyQt5.QtWidgets import QWidget, QApplication, QLabel, QVBoxLayout
from PyQt5.QtGui import QPixmap, QColor
import sys


class App(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Qt static label demo")
        width = 640
        height = 480
        # create the label that holds the image
        self.image_label = QLabel(self)
        # create a text label
        self.textLabel = QLabel('Demo')
        # create a vertical box layout and add the two labels
        vbox = QVBoxLayout()
        vbox.addWidget(self.image_label)
        vbox.addWidget(self.textLabel)
        # set the vbox layout as the widget's layout
        self.setLayout(vbox)
        # create a grey pixmap
        grey = QPixmap(width, height)
        grey.fill(QColor('darkGray'))
        # set the image label to the grey pixmap
        self.image_label.setPixmap(grey)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    a = App()
    a.show()
    sys.exit(app.exec_())
It should look like this:
OK, so how do we display an image? We could use Qt to load the image directly, but I want to do this with OpenCV so it can be integrated into a computer vision app. To do this, load the image with cv2.imread, then convert it to a QPixmap and rescale. Here is an example:
def convert_cv_qt(self, cv_img):
    """Convert from an OpenCV image to QPixmap"""
    rgb_image = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)
    h, w, ch = rgb_image.shape
    bytes_per_line = ch * w
    convert_to_Qt_format = QtGui.QImage(rgb_image.data, w, h, bytes_per_line, QtGui.QImage.Format_RGB888)
    p = convert_to_Qt_format.scaled(self.display_width, self.display_height, Qt.KeepAspectRatio)
    return QPixmap.fromImage(p)
This is part of a class that defines self.display_width and self.display_height. The full code is here (staticLabel2.py), and the result should look like this:
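For reference, the pieces presumably fit together roughly as follows. This is only a sketch, written under the assumption that staticLabel2.py simply loads an image with cv2.imread and hands it to convert_cv_qt; 'photo.jpg' is a placeholder filename, not a file from the tutorial.

import sys
import cv2
from PyQt5 import QtGui
from PyQt5.QtWidgets import QWidget, QApplication, QLabel, QVBoxLayout
from PyQt5.QtGui import QPixmap
from PyQt5.QtCore import Qt


class App(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Qt static label demo")
        self.display_width = 640
        self.display_height = 480
        # create the labels and lay them out as before
        self.image_label = QLabel(self)
        self.textLabel = QLabel('Demo')
        vbox = QVBoxLayout()
        vbox.addWidget(self.image_label)
        vbox.addWidget(self.textLabel)
        self.setLayout(vbox)
        # load an image with OpenCV and show it on the label
        # ('photo.jpg' is a placeholder; use any image you have)
        cv_img = cv2.imread('photo.jpg')
        self.image_label.setPixmap(self.convert_cv_qt(cv_img))

    def convert_cv_qt(self, cv_img):
        """Convert from an OpenCV image to QPixmap (same helper as above)"""
        rgb_image = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)
        h, w, ch = rgb_image.shape
        bytes_per_line = ch * w
        qt_image = QtGui.QImage(rgb_image.data, w, h, bytes_per_line, QtGui.QImage.Format_RGB888)
        p = qt_image.scaled(self.display_width, self.display_height, Qt.KeepAspectRatio)
        return QPixmap.fromImage(p)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    a = App()
    a.show()
    sys.exit(app.exec_())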
So can we now use this to display video? We could open the webcam and update our image every frame. liveLabel1.py attempts to do this, but it does not work. The problem is that the video capture loop blocks the processing, so the message system that Qt uses to do things like drawing the widgets never gets called. We need to fix this by capturing the webcam in a separate thread.
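To see why it fails, here is a minimal sketch of the kind of blocking loop involved. This is an assumed reconstruction, since liveLabel1.py itself is not reproduced here; the point is only that the capture loop never hands control back to Qt's event loop.

from PyQt5 import QtGui
from PyQt5.QtWidgets import QWidget, QApplication, QLabel, QVBoxLayout
from PyQt5.QtGui import QPixmap
import sys
import cv2


class App(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Qt live label demo (blocking, broken)")
        self.image_label = QLabel(self)
        vbox = QVBoxLayout()
        vbox.addWidget(self.image_label)
        self.setLayout(vbox)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    a = App()
    a.show()
    cap = cv2.VideoCapture(0)
    # this loop hogs the main thread: app.exec_() is never reached, so Qt
    # never processes paint events and the window appears frozen
    while True:
        ret, cv_img = cap.read()
        if ret:
            rgb = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)
            h, w, ch = rgb.shape
            qimg = QtGui.QImage(rgb.data, w, h, ch * w, QtGui.QImage.Format_RGB888)
            a.image_label.setPixmap(QPixmap.fromImage(qimg))
    sys.exit(app.exec_())

Here's the working code, which moves the capture loop into its own QThread and sends each frame back to the GUI thread with a signal: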
from PyQt5 import QtGui
from PyQt5.QtWidgets import QWidget, QApplication, QLabel, QVBoxLayout
from PyQt5.QtGui import QPixmap
import sys
import cv2
from PyQt5.QtCore import pyqtSignal, pyqtSlot, Qt, QThread
import numpy as np


class VideoThread(QThread):
    change_pixmap_signal = pyqtSignal(np.ndarray)

    def run(self):
        # capture from web cam
        cap = cv2.VideoCapture(0)
        while True:
            ret, cv_img = cap.read()
            if ret:
                self.change_pixmap_signal.emit(cv_img)


class App(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Qt live label demo")
        self.display_width = 640
        self.display_height = 480
        # create the label that holds the image
        self.image_label = QLabel(self)
        self.image_label.resize(self.display_width, self.display_height)
        # create a text label
        self.textLabel = QLabel('Webcam')
        # create a vertical box layout and add the two labels
        vbox = QVBoxLayout()
        vbox.addWidget(self.image_label)
        vbox.addWidget(self.textLabel)
        # set the vbox layout as the widget's layout
        self.setLayout(vbox)
        # create the video capture thread
        self.thread = VideoThread()
        # connect its signal to the update_image slot
        self.thread.change_pixmap_signal.connect(self.update_image)
        # start the thread
        self.thread.start()

    @pyqtSlot(np.ndarray)
    def update_image(self, cv_img):
        """Updates the image_label with a new OpenCV image"""
        qt_img = self.convert_cv_qt(cv_img)
        self.image_label.setPixmap(qt_img)

    def convert_cv_qt(self, cv_img):
        """Convert from an OpenCV image to QPixmap"""
        rgb_image = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)
        h, w, ch = rgb_image.shape
        bytes_per_line = ch * w
        convert_to_Qt_format = QtGui.QImage(rgb_image.data, w, h, bytes_per_line, QtGui.QImage.Format_RGB888)
        p = convert_to_Qt_format.scaled(self.display_width, self.display_height, Qt.KeepAspectRatio)
        return QPixmap.fromImage(p)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    a = App()
    a.show()
    sys.exit(app.exec_())
Note that we send a numpy array using PyQt's signal/slot mechanism to ensure thread safety. This should now display live images from your webcam. There might, however, be an error when you close the app, because the capture device has not been shut down correctly. (On my Mac this does not happen, but on my Linux machine it does.) To solve this we need to override App's closeEvent function and signal to the thread to shut down.
from PyQt5 import QtGui
from PyQt5.QtWidgets import QWidget, QApplication, QLabel, QVBoxLayout
from PyQt5.QtGui import QPixmap
import sys
import cv2
from PyQt5.QtCore import pyqtSignal, pyqtSlot, Qt, QThread
import numpy as np


class VideoThread(QThread):
    change_pixmap_signal = pyqtSignal(np.ndarray)

    def __init__(self):
        super().__init__()
        self._run_flag = True

    def run(self):
        # capture from web cam
        cap = cv2.VideoCapture(0)
        while self._run_flag:
            ret, cv_img = cap.read()
            if ret:
                self.change_pixmap_signal.emit(cv_img)
        # shut down capture system
        cap.release()

    def stop(self):
        """Sets run flag to False and waits for thread to finish"""
        self._run_flag = False
        self.wait()


class App(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Qt live label demo")
        self.display_width = 640
        self.display_height = 480
        # create the label that holds the image
        self.image_label = QLabel(self)
        self.image_label.resize(self.display_width, self.display_height)
        # create a text label
        self.textLabel = QLabel('Webcam')
        # create a vertical box layout and add the two labels
        vbox = QVBoxLayout()
        vbox.addWidget(self.image_label)
        vbox.addWidget(self.textLabel)
        # set the vbox layout as the widget's layout
        self.setLayout(vbox)
        # create the video capture thread
        self.thread = VideoThread()
        # connect its signal to the update_image slot
        self.thread.change_pixmap_signal.connect(self.update_image)
        # start the thread
        self.thread.start()

    def closeEvent(self, event):
        # stop the video thread cleanly before the window closes
        self.thread.stop()
        event.accept()

    @pyqtSlot(np.ndarray)
    def update_image(self, cv_img):
        """Updates the image_label with a new OpenCV image"""
        qt_img = self.convert_cv_qt(cv_img)
        self.image_label.setPixmap(qt_img)

    def convert_cv_qt(self, cv_img):
        """Convert from an OpenCV image to QPixmap"""
        rgb_image = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)
        h, w, ch = rgb_image.shape
        bytes_per_line = ch * w
        convert_to_Qt_format = QtGui.QImage(rgb_image.data, w, h, bytes_per_line, QtGui.QImage.Format_RGB888)
        p = convert_to_Qt_format.scaled(self.display_width, self.display_height, Qt.KeepAspectRatio)
        return QPixmap.fromImage(p)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    a = App()
    a.show()
    sys.exit(app.exec_())
This Person Does Not Exist
The power of deep generative adversarial networks (GANs) is phenomenal. This website generates a new image of a person on every page refresh; the faces are completely artificial and look almost flawless.
Source: This Person Does Not Exist