Research on the fairness of networked multimedia to appear in FAT/MM WS at ACM Multimedia 2019

A job well done for a first-year PhD student.

SDCN: Software Defined Cognitive Networking


Basil, A. et al., A Software Defined Network Based Research on Fairness in Multimedia, FAT/MM Workshop, 27th ACM International Conference on Multimedia (ACM MM 2019), France, October 2019.

The demand for online distribution of high-quality, high-throughput content has led to non-cooperative competition for network resources among a growing number of media applications. This significantly impacts network efficiency and the quality of user experience (QoE), and creates discrepancies in QoE across user devices. Within a multi-user, multi-device environment, measuring and maintaining perceivable fairness becomes as critical as achieving high QoE on individual user applications. This paper discusses application- and human-level fairness in networked multimedia applications, and how such fairness can be managed through novel network designs using programmable networks such as software-defined networks (SDN).



“Disruptive” VR art? A quick update

Lovely sunset view from Lowry

Our visit to TVX 2019 was a tremendous success. Murtada’s and Alison’s lightning talks were well received, and we managed to have two demos at BBC Quay House on the last day.

Alison’s VR painting demo had a great start, then took an interesting turn and became a community art creation exercise. Audience members from different backgrounds built on each other’s creations, and the artwork just kept growing in multiple dimensions (there is no canvas to limit you, and no one is afraid of making a “digital mess”). This has really inspired us to look into collaborative VR art more closely.

Alison’s VR painting demo (trust me, I tried tidying the desk)

Murtada’s gaze-controlled game saw a lot of visitors who “always wanted to do something with eye-tracking in VR”. We are already working on the third version of the game. We have changed our strategy from “building a research tool that contains game elements” to “building a professional VR game with a research tool integrated”. The game will also be part of a use case for our Intelligent Networks experiments.

Murtada’s gaze-controlled game demo

Immediately after TVX, we also organised a workshop at the Merged Futures event on our campus. Our audience was mainly SMEs and educators from Northants and nearby counties.

VR arts and education workshop at Merged Futures 2019, UON

Slides from the workshop:

Smart Campus project – part 1

Most research in communication networks is quite fundamental, such as sending data frames from point A to point B as quickly as possible with little loss along the way. Some networking research can also benefit communities indirectly. I recently started a new collaboration with our University IT department on a smart campus project, where we use anonymised data sampled from a range of on-campus services for service improvement and automation, with the help of information visualisation and data analytics. The first stage of the project is very much focused on the “intent-based” networking infrastructure by Cisco on Waterside campus. This state-of-the-art system provides us with a central console and APIs to manage all network switches and 1000+ wireless access points (APs). Systematically studying how user devices connect to our APs can help us, in a non-intrusive fashion, better understand the way(s) our campus is used, and use that intelligence to improve our campus services.

Although it is possible to correlate data from various university information systems to infer the ownership of devices connected to our wireless networks, my research does not make use of any data related to user identity at this stage. This is not only because it is unnecessary (we are only interested in how people use the campus as a whole), but also because of how privacy and data protection rules are implemented. This is not to say that we’ll avoid any research on individual user behaviours: there are many use cases around timetabling, bus services, and personal wellbeing and safety that will require volunteers to sign up to participate.

This part 1 blog shares the R&D architecture and some early prototypes of data visualisation before they evolve into something humongous.

A few samples of the charts we have:

Wireless devices connected in an academic building, with a breakdown for each floor. There is a clear weekly and daily pattern. We are able to tell which floors are over- or under-used, which helps us improve our energy efficiency and helps students or staff find free space to work. [image not for redistribution]
An “anomaly” due to a fire alarm test (hundreds of devices leaving the building within minutes). We can examine how people leave from different areas of the building and identify any bottlenecks. [image not for redistribution]
Connected devices on campus throughout a typical off-term day, with breakdowns for different areas (buildings, zones, etc.). [image not for redistribution]
Heatmap of devices connected in an academic building during off-term weeks. The heat strips are grouped by weekday, except for an Open Day Saturday. [image not for redistribution]
Device movements between buildings/areas. This helps us understand the complex dependencies between parts of our infrastructure and how we can improve the user experience. [image not for redistribution]
How connected devices were distributed across campus over the past 7 days, plus the top 5 areas on each floor of the academic buildings. [image not for redistribution]

So how were the charts made?

The source of our networking data is the Cisco controllers. The DNA Center offers secure APIs, while the WLC has a well-structured interface for data scraping. Either option worked for us, so we have Python-based data sampling functions programmed for both interfaces. What we collect is a “snapshot” of all devices on our wireless networks and the details of the APs they are connected to. All device information such as MAC addresses can be hashed, as long as we can still differentiate one device from another (to count unique devices) and associate a device across different samples.

We think of devices’ movements on campus as a continuous signal. The sampling process is essentially an ADC (analogue-to-digital conversion) exercise similar to audio sampling. The Nyquist theorem instructs us to use a sampling frequency at least twice the highest frequency of the analogue signal to faithfully capture the characteristics of the input. In practice, the signal frequency is determined by the density of wireless APs in an area and how fast people travel. In a seating area on our Learning Hub ground floor, I could easily pass a handful of APs during a minute-long walk. Following the maths and sampling from the control centre every few seconds risks killing the data source (and, unlikely but possibly, the entire campus network). For the first prototype, I compromised on a 1/min sampling rate. This may not affect our understanding of movement between buildings that much (unless you run really fast between buildings), but we might need some sensible data interpolation for indoor movements (e.g., a device didn’t teleport from the third-floor library to a fourth-floor classroom; it travelled via the stairwell/lift).
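To make the Nyquist argument concrete: passing roughly five APs during a minute-long walk means the “signal” (AP transitions) peaks at around 5 events/min, so faithful capture would need at least 10 samples/min, i.e. one sample every 6 seconds or so; the 1/min rate deliberately undersamples indoor movement. Below is a minimal sketch of the sampling loop, not our production code. The controller query is a stand-in (fetch_client_snapshot and its record fields are hypothetical; the real DNA Center and WLC client code is not shown), and the salted SHA-256 hash is one way to implement the anonymisation described above.

```python
import hashlib
import pickle
import time
from datetime import datetime, timezone

SALT = b"rotate-per-deployment"  # salt for the one-way MAC hashing
SAMPLE_INTERVAL = 60             # seconds: the 1/min compromise rate

def anonymise(mac: str) -> str:
    """One-way hash of a MAC address. Stable across samples, so we can
    still count unique devices and follow a device between snapshots."""
    return hashlib.sha256(SALT + mac.encode()).hexdigest()

def fetch_client_snapshot() -> list[dict]:
    """Stand-in for the controller query (DNA Center API / WLC scraping).
    Returns one record per connected device; replace with a real client."""
    return [{"mac": "aa:bb:cc:dd:ee:ff", "ap_name": "LH-GF-AP01"}]

def sample_once() -> None:
    """Take one snapshot of all connected devices and write it to disk."""
    taken_at = datetime.now(timezone.utc)
    clients = [
        {"device": anonymise(c["mac"]), "ap": c["ap_name"]}
        for c in fetch_client_snapshot()
    ]
    fname = taken_at.strftime("sample_%Y%m%d_%H%M%S.pkl")
    with open(fname, "wb") as f:  # one Pickle file per sample
        pickle.dump({"taken_at": taken_at, "clients": clients}, f)

if __name__ == "__main__":
    while True:
        sample_once()
        time.sleep(SAMPLE_INTERVAL)
```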

Architecture (greyed-out elements will be discussed in future blogs)

The sampling outcomes are stored as data snippets in the form of Python Pickle files (one file per sample). The files are then picked up asynchronously by a Python-based data filtering and DB insertion process, which inserts the data into a database for analysis. Processed Pickle files are archived and hopefully never needed again. Separating the sampling and the DB insertion makes things easier when you are prototyping (e.g., changing the DB table structure or data types while sampling continues).
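A minimal sketch of that pickup-and-insert process could look like the following. It uses sqlite3 purely as a stand-in for the production database, and the sightings table and its columns are hypothetical:

```python
import glob
import os
import pickle
import sqlite3  # stand-in for the production database

INBOX, ARCHIVE = "samples", "samples/archive"

def process_pending(db: sqlite3.Connection) -> None:
    """Pick up any Pickle snapshots the sampler has written, insert the
    records, then move each file to the archive so it is handled once."""
    for path in sorted(glob.glob(os.path.join(INBOX, "*.pkl"))):
        with open(path, "rb") as f:
            sample = pickle.load(f)
        db.executemany(
            "INSERT INTO sightings (taken_at, device, ap) VALUES (?, ?, ?)",
            [
                (sample["taken_at"].isoformat(), c["device"], c["ap"])
                for c in sample["clients"]
            ],
        )
        db.commit()
        os.rename(path, os.path.join(ARCHIVE, os.path.basename(path)))

if __name__ == "__main__":
    os.makedirs(ARCHIVE, exist_ok=True)
    conn = sqlite3.connect("campus.db")
    conn.execute("CREATE TABLE IF NOT EXISTS sightings "
                 "(taken_at TEXT, device TEXT, ap TEXT)")
    process_pending(conn)
```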

Data growth [image not for redistribution]

With the records in our DB growing at a rate of millions per day, some resource-intensive pre-processing/aggregation (such as the number of unique devices per hour on each floor of a building) needs to be done periodically to accelerate the server-side functions behind the data visualisation, reducing the volume of data going to the web server by several orders of magnitude. This comes at the cost of inserting additional entries into the database and risks creating “seams” between iterations of pre-processing, but the benefit clearly outweighs the cost.
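As an illustration of that rollup, continuing the hypothetical schema from the sketch above (the access_points and hourly_unique tables are likewise assumptions), the hourly unique-device count per floor could be pre-aggregated like this:

```python
import sqlite3

HOURLY_ROLLUP = """
INSERT INTO hourly_unique (hour, building, floor, unique_devices)
SELECT strftime('%Y-%m-%d %H:00', s.taken_at) AS hour,
       a.building,
       a.floor,
       COUNT(DISTINCT s.device)
FROM sightings s
JOIN access_points a ON a.name = s.ap
WHERE s.taken_at >= :since  -- roll up only the new window; the window
                            -- boundary is where 'seams' can appear
GROUP BY hour, a.building, a.floor
"""

def rollup(db: sqlite3.Connection, since: str) -> None:
    """Periodic aggregation: unique devices per hour, per building floor."""
    db.execute(HOURLY_ROLLUP, {"since": since})
    db.commit()
```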

The visualisation process is split into two parts: the plot (chart) and the data feed. There are many choices for professional-looking static information plotting, such as Matplotlib and ggplot2 (see how the BBC Visual and Data Journalism team works with graphics in R). Knowing that we’ll present the figures in interactive workshops, I made a start with web-based dynamic charts that “bring data to life” and allow us to illustrate layers of information while encouraging exploration. Frameworks that support such tasks include D3.js and Highcharts (a list of 14 can be found here). Between the two, D3 gives you more freedom to customise your chart, but you’ll need to be an SVG guru (with a degree of artistic excellence) to master it. Meanwhile, Highcharts provides many sample charts to begin with, and the data feed is easy to program. It’s an ideal tool for prototyping, and only some basic knowledge of JavaScript is needed. To feed structured data to Highcharts, we pair each chart page with a PHP worker for data aggregation and formatting. The workflow is as follows (a sketch of the data feed follows the list):
1) The client-side webpage loads all elements, including the Highcharts framework and the HTML elements that accommodate the chart.
2) A jQuery function waits for the page load to complete and initiates a Highcharts instance with the data feed left open (empty).
3) The same function then calls a separate JavaScript function that performs an AJAX call to the corresponding PHP worker.
4) The PHP worker runs server-side code, fetches data from MySQL, and performs any data aggregation and formatting necessary before returning the JSON-encoded results to the front-end JavaScript function.
5) Upon receiving the results, the JavaScript function conducts lightweight data demultiplexing for the more complex chart types and sets the data attribute of the Highcharts instance with the new data feed.
For certain charts, we also provide extra user input fields to support user queries (e.g., plotting data from a particular day).
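Our workers are written in PHP; to keep the sketches in this post in one language, here is a hypothetical Python (Flask) equivalent of step 4. It reads the pre-aggregated table from the earlier rollup sketch and returns JSON pairs a Highcharts series can accept directly; the endpoint path and the ?day= parameter are illustrative:

```python
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/feed/hourly")
def hourly_feed():
    """Return [hour, count] pairs for one day, ready for a Highcharts
    series. The ?day= query string mirrors the extra user input fields."""
    day = request.args.get("day", "2019-06-01")
    db = sqlite3.connect("campus.db")
    rows = db.execute(
        "SELECT hour, SUM(unique_devices) FROM hourly_unique "
        "WHERE hour LIKE ? GROUP BY hour ORDER BY hour",
        (day + "%",),
    ).fetchall()
    db.close()
    return jsonify([[hour, count] for hour, count in rows])

if __name__ == "__main__":
    app.run(debug=True)
```

On the front end, the AJAX callback would then hand the decoded JSON to the open chart, e.g. via Highcharts’ series setData method, to complete step 5.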

Data science and network management at IEEE IM 2019, Washington, D.C.

IEEE IM 2019 – Washington DC, USA (link to papers)
Following IM 2017 in picturesque Lisbon, one of the most beautiful cities in Europe, this year’s event was held in the US capital during its peak cherry blossom season.

The conference adopted the theme of “Intelligent Management for the Next Wave of Cyber and Social Networks”. Besides the regular tracks, the five-day conference featured some great tutorials, keynotes and panels. I have pages of notes and many contacts to follow up.

A few highlights were: zero-touch network and service management (and how it’s actually “touch less” rather than touchless!); Huawei’s Big Packet Protocol (network management via packet header programming); DARPA’s off-planet network management (fractionated architectures for satellites); blockchain’s social, political and regulatory challenges (does it not work with GDPR?) from UZH; and data science/ML for network management from Google and Orange Labs (with some Python notebooks and a comprehensive survey paper of 500+ references), among many more. I am hoping to write more about some of these in the future, once I have a chance to study them further. There are certainly some good topics for student projects.

Since I am linked to both the multimedia/HCI and communication network communities, I have the opportunity to observe the different approaches and challenges these communities face with AI and ML. In the multimedia community, it’s relatively easy to acquire large, clean datasets, and there is a high level of tolerance when it comes to “trial and error”: 1) no one will get upset if a few out of a hundred image search results are inaccurate, and 2) you can piggy-back a training module or reinforcement learning on your services to improve the model. Furthermore, applications are often part of a closed proprietary environment (end-to-end control), and users are not that bothered about giving up their data. In networking, things are not far from “mission impossible”. 95% accuracy in packet forwarding will not get you very far, and there is not much infrastructure available to track any data, let alone make any data open for research. Even where there are tools to do so, you are likely to encounter encryption, or information that is too deep in the packet to extract in practice. Also, tracking network data seems to attract more controversy. We have a long and interesting way to go.

Washington, D.C. is surrounded by some amazing places to visit. George Washington’s riverside Mount Vernon is surely worth a trip. Not far from Dulles airport is Great Falls Park, with spectacular waterfalls on the Potomac River, which separates Maryland and Virginia. Further west are the 100-mile scenic Skyline Drive and the Appalachian Trail in Shenandoah National Park.

We are taking VR and art research to ACM TVX 2019

I have been a regular visitor to ACM TVX since it first became an ACM-sponsored event in 2014 (it was previously known as EuroITV). This year, the conference will be held at MediaCityUK, Salford in early June. We’ll bring two pieces of early-stage research to Salford: understanding user attention in VR using gaze-controlled games, by Murtada Dohan (a newly started PhD candidate), and a demo of abstract painting in VR by fine art artist Dr Alison Goodyear. You might have guessed that we have plans to bring these two together and experiment with new ways of content creation and audience engagement for both the arts and HCI communities.

Links to:


Dohan, M. and Mu, M., Understanding User Attention In VR Using Gaze Controlled Games.
Abstract: Understanding users’ intent plays a pivotal role in developing immersive and personalised media applications. This paper introduces our recent research and user experiments towards interpreting user attention in virtual reality (VR). We designed a gaze-controlled Unity VR game for this study and implemented additional libraries to bridge raw eye-tracking data with game elements and mechanics. The experimental data show distinctive patterns of fixation spans, which are paired with user interviews to help us explore the characteristics of user attention.

Goodyear, A. and Mu, M., Abstract Painting Practice: Expanding in a Virtual World
Abstract: This paper sets out to describe, through a demo at the TVX conference, how virtual reality (VR) painting software is beginning to open up as a new medium for visual artists working in the field of abstract painting. It does so by describing how an artist who usually makes abstract paintings with paint and canvas in a studio, that is, paintings existing as physical objects in the world, encounters and negotiates the process of making abstract paintings in VR using Tilt Brush software and head-mounted displays (HMDs). The paper also indicates potential future avenues for content creation in this emerging field, and what this might mean not only for the artist and the viewer, but for art institutions trying to provide effective methods of delivery for innovative content in order to develop and grow new audiences.

Copyright belongs to Dr Alison Goodyear

Speaking at the Westminster HE Forum – Technologies in higher education

I had the great pleasure of joining a Westminster Higher Education Forum event today as a speaker. My session was chaired by the Labour MP Alex Sobel, and its main theme was the opportunities and challenges in adopting new technologies in colleges and universities in the UK. The venue was packed with 100+ delegates from over 60 institutions and businesses across England. I spoke about our research findings on the use of VR in education and shared my views on how technologies can empower human educators in Education 4.0. The following are my notes; the official transcripts from all speakers will be available on the Westminster Forum website.


Virtual reality in its early days was mainly used for industrial simulation and military training, in controlled environments using specialised equipment. As the technologies have become more accessible, we have started to see more use of VR in gaming and education. In education, VR is mostly used as a stimulus to enhance student engagement and the learning experience. It helps visual learners, breaks barriers, and can visualise things that are hard to imagine. So we are mostly capitalising on the indirect benefits.

My research group is interested in whether such stimuli can truly improve learning outcomes, and how, so that we know how to improve the technology or use it more appropriately. We conducted an experiment with two groups of university students to compare how well they learned hard sciences using VR materials versus PowerPoint slides. Their performance was measured using a short exam paired with interviews. The results suggest that the majority of students prefer learning in VR, but there is no significant difference between the two in average scores. Recent research by Cornell University shows a similar finding. However, when we looked at the breakdown of scores on individual questions, we discovered that students who studied via VR did very well on questions involving visual information recognition, but struggled to recall numerical and textual details such as the year and location of an event. We think this is due to how information is presented and the extra cognitive load in VR. So VR made some things better but others worse; it’s a double-edged sword.

This does not mean that VR is a waste of money. We need more work to learn how to use the tool better. This means two things. One, we need VR to be more accessible: not only in cost, but more importantly through easy-to-use design tools and open libraries that help average lecturers embrace the technology. Two, we need appropriate metrics and measurement tools to assess the actual impact of new technologies, and to share that experience with the community.

Furthermore, we need to keep an eye on what roles VR should take in education. One thing we can learn from the past is PowerPoint in education. (PowerPoint was invented in the 1980s, acquired by Microsoft, and went on to become one of the most commonly used tools in business and education. It has drastically changed how teaching is done in classrooms.) PowerPoint was meant to augment a human presenter, but it has become a main delivery vehicle in the classroom, with lecturers acting as the operators or narrators of slides. Many conclude that PowerPoint has not empowered academia. (Some institutions have banned teachers from using PowerPoint. According to the NYTimes, similar decisions were made in the US Armed Forces, which regard it as a poor tool for decision-making.) Many institutions, including the University of Northampton, are moving away from pure slideshows towards active and blended learning, and are using data science and the smart campus to support hands-on, experimental and interactive learning. So we can certainly learn from the past when we approach VR and other new technologies.

Another important aspect is the human factor. At the end of the day, only human educators are accountable for the teaching process. We listen to what learners say, observe their emotions, sympathise with their personal issues, and reason with them over every decision we make while trying to be as fair as possible. My team works on many computer science research topics related to human factors, such as interpretable machine learning and understanding human intent. However, new technologies such as VR and AI should be designed and integrated to empower human educators rather than replace us.


Top 1% of reviewers in Computer Science?

https://publons.com/awards/2018/esi/?name=Mu%20Mu&esi=13

Like many academics, I regularly engage with the reviewing process of renowned conferences and journals. I have not ventured into any substantial editorial role yet, but I try to help out as much as possible. All of my journal review activities are registered on Publons to keep a record (mainly for myself). It was to my surprise that I received the Publons Peer Review Awards 2018 as one of the “Top 1% of reviewers in Computer Science”. I know many people will see this as a “gimmick”, but hey, our research communities rely heavily on quality peer reviews from volunteers. Also, an award is an award! 🙂