The new year started with a couple of interesting projects on AI.
In a HEIF-funded “Big Ideas” project, I am working with a brand placement company to prototype a solution that uses computer vision and deep learning to automate the evaluation of how brands and products appear in movies and TV shows. The goal is to assist, if not replace, the daunting manual work of a human evaluator. Measuring the impact of brand placement is a complex topic, and it is underpinned by the capability to detect the presence of products. Object detection (classification + localisation) is a well-researched topic with many established deep learning frameworks available. Our early prototypes, which use a YOLOv3-based CNN (convolutional neural network) architecture trained on the FlickrLogos-32 dataset, have shown promising results. There is a long list of TODOs linked to gamma correction, motion complexity, etc.
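To illustrate the localisation half of object detection, here is a minimal sketch of intersection-over-union (IoU), the standard measure for scoring a predicted bounding box against ground truth. The box coordinates below are illustrative, not outputs of our YOLOv3 prototype.

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A hypothetical predicted logo box overlapping a ground-truth box.
pred = (10, 10, 50, 50)
truth = (20, 20, 60, 60)
print(round(iou(pred, truth), 3))  # → 0.391
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5; the same measure also drives non-maximum suppression inside detectors like YOLOv3.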
Our analysis of eye gaze and body motion data from a previous VR experiment continues. The main focus is on feature extraction, clustering and data visualisation. A PhD researcher has made quite a few interesting observations on how men and women behave differently in VR, and on how this could contribute to an improved measurement of user attention.
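As a concrete example of the feature-extraction step, the sketch below implements a dispersion-threshold (I-DT style) fixation detector over a 2-D gaze trace: windows of samples whose spatial spread stays under a threshold are grouped into fixations, which can then feed clustering and visualisation. The trace, thresholds, and window length are illustrative assumptions, not values from our experiment.

```python
def dispersion(points):
    """Spatial dispersion of a set of (x, y) gaze samples."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(trace, max_dispersion=1.0, min_samples=3):
    """Return (start, end) sample-index pairs of fixations in a gaze trace."""
    fixations = []
    i = 0
    while i <= len(trace) - min_samples:
        j = i + min_samples
        if dispersion(trace[i:j]) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            while j < len(trace) and dispersion(trace[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

# Synthetic trace: a fixation, a saccade sample, then a second fixation.
trace = [(0.0, 0.0), (0.1, 0.1), (0.05, 0.0),              # fixation 1
         (5.0, 5.0),                                        # saccade
         (9.0, 9.0), (9.1, 9.0), (9.0, 9.1), (9.1, 9.1)]   # fixation 2
print(detect_fixations(trace))  # → [(0, 3), (4, 8)]
```

Fixation count, duration, and centroid positions derived this way are typical low-level features before clustering.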
The research on human attention in VR is not limited to passive measurement, and we already have some plans to experiment with creative art. We spent hours observing how young men and women interact with VR paintings, which has inspired us to develop generative artworks that capture the user experience of art encounters. Our first VR generative art demo will be hosted in the Milton Keynes Gallery project space in Feb 2020 as part of Alison Goodyear’s Paint Park exhibition. My SDCN project has been supporting the research as part of its Connected VR use case.