Basil, A. et al., A Software Defined Network Based Research on Fairness in Multimedia, FAT/MM WS, 27th ACM International Conference on Multimedia (ACM MM 2019), France. 10/2019
The demand for online distribution of high-quality, high-throughput content has led to non-cooperative competition for network resources among a growing number of media applications. This significantly impacts network efficiency and the quality of user experience (QoE), and creates discrepancies in QoE across user devices. Within a multi-user, multi-device environment, measuring and maintaining perceivable fairness becomes as critical as achieving QoE in individual user applications. This paper discusses application- and human-level fairness over networked multimedia applications and how such fairness can be managed through novel network designs using programmable networks such as software-defined networks (SDN).
Our visit to TVX 2019 was a tremendous success. Murtada's and Alison's lightning talks were well received, and we managed to hold two demos in the BBC Quay House on the last day.
Alison's VR painting demo had a great start, then took an interesting turn and became a community art creation exercise. Audience members from different backgrounds built on each other's creations, and the artwork just kept growing in multiple dimensions (there is no canvas to limit you, and no one is afraid of making a "digital mess"). This has really inspired us to look into collaborative VR art more closely.
Murtada's gaze-controlled game attracted many visitors who "always wanted to do something with eye-tracking in VR". We are already working on the third version of the game. We have changed our strategy from "building a research tool that contains game elements" to "building a professional VR game with research tools integrated". The game will also be part of a use case for our Intelligent Networks experiments.
Immediately after TVX, we also organised a workshop at the Merged Futures event on our campus. Our audience was mainly SMEs and educators from Northants and nearby counties.
Most research in communication networks is quite fundamental, such as sending data frames from point A to point B as quickly as possible with little loss along the way. Some networking research can also benefit communities indirectly. I recently started a new collaboration with our University IT department on a smart campus project, where we use anonymised data sampled from a range of on-campus services for service improvement and automation, with the help of information visualisation and data analytics.

The first stage of the project is very much focused on the "intent-based" networking infrastructure by Cisco on Waterside campus. The state-of-the-art system provides us with a central console and APIs to manage all network switches and 1000+ wireless APs. Systematically studying how user devices connect to our APs can help us, in a non-intrusive fashion, better understand the way(s) our campus is used, and use that intelligence to improve our campus services.

Although it is possible to correlate data from various university information systems to infer the ownership of devices connected to our wireless networks, my research does not make use of any data related to user identity at this stage. This is not only because it is unnecessary (we are only interested in how people use the campus as a whole), but also because of how privacy and data protection rules are implemented. That is not to say we will avoid any research on individual user behaviours: there are many use cases around timetabling, bus services, personal wellbeing and safety that will require volunteers to sign up and participate.
This part 1 blog shares the R&D architecture and some early prototypes of data visualisation before they evolve into something humongous.
A few samples of the charts we have:
So how were the charts made?
The source of our networking data is the Cisco controllers. The DNA Centre offers secure APIs, while the WLC has a well-structured interface for data scraping. Either option worked for us, so we have Python-based data sampling functions programmed for both interfaces. What we collect is a "snapshot" of all devices on our wireless networks and the details of the APs they are connected to. All device information, such as MAC addresses, can be hashed, as long as we can still differentiate one device from another (to count unique devices) and associate a device across different samples.

We think of devices' movements on campus as a continuous signal. The sampling process is essentially an ADC (analogue-to-digital conversion) exercise similar to audio sampling. The Nyquist theorem instructs us to use a minimum sampling frequency of at least twice the highest frequency of the analogue signal to faithfully capture the characteristics of the input. In practice, the signal frequency is determined by the density of wireless APs in an area and how fast people travel. In a seating area on our Learning Hub ground floor, I could easily pass a handful of APs during a minute-long walk. Following the maths and sampling from the control centre every few seconds risks killing the data source (and, unlikely but possible, the entire campus network). For the first prototype, I compromised on a 1/min sampling rate. This may not affect our understanding of movement between buildings that much (unless you run really fast between buildings), but we might need some sensible data interpolation for indoor movements (e.g., a device didn't teleport from the third-floor library to a fourth-floor classroom; it travelled via a stairwell/lift).
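The hashing step can be sketched roughly as follows. This is a minimal illustration, not the actual pipeline: the field names, the salt value and the 16-character truncation are all assumptions for the example. A deterministic salted hash keeps the same device mapped to the same pseudonym across samples, which is exactly what counting unique devices and tracking movement requires.

```python
import hashlib

def anonymise_snapshot(snapshot, salt="example-salt"):
    """Replace each MAC address with a salted SHA-256 digest.

    The hash is deterministic, so the same device maps to the same
    pseudonym in every sample (letting us count unique devices and
    follow movement between APs), while the raw MAC is never stored.
    """
    anonymised = []
    for record in snapshot:
        digest = hashlib.sha256((salt + record["mac"]).encode()).hexdigest()
        anonymised.append({"device": digest[:16], "ap": record["ap"]})
    return anonymised

# Illustrative snapshot: two devices seen on the same AP
sample = [
    {"mac": "aa:bb:cc:dd:ee:01", "ap": "LH-G-12"},
    {"mac": "aa:bb:cc:dd:ee:02", "ap": "LH-G-12"},
]
print(anonymise_snapshot(sample))
```

In a real deployment the salt would be kept secret (otherwise a known MAC can be re-hashed and matched), and it should stay fixed across samples so devices remain associable.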
The sampling outcomes are stored as data snippets in the form of Python Pickle files (one file per sample). The files are then picked up asynchronously by a Python-based data filtering and DB insertion process, which inserts the data into a database for analysis. Processed Pickle files are archived and hopefully never needed again. Separating the sampling and DB insertion makes things easier when you are prototyping (e.g., changing the DB table structure or data types while sampling continues).
With the records in our DB growing at a rate of millions per day, some resource-intensive pre-processing/aggregation (such as the number of unique devices per hour on each floor of a building) needs to be done periodically to accelerate any subsequent server-side functions for data visualisation, reducing the volume of data going to a web server by several orders of magnitude. This comes at the cost of inserting additional entries into the database and risks creating "seams" between iterations of pre-processing, but the benefit clearly outweighs the cost.
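The unique-devices-per-hour aggregation mentioned above reduces, in essence, to the following. The tuple layout and the "floor" field are illustrative assumptions; the real job runs against the database rather than in-memory tuples, but the bucketing logic is the same.

```python
from collections import defaultdict

def unique_devices_per_hour(records):
    """Collapse raw samples into (floor, hour) -> unique device counts.

    Each record is a (device_id, floor, iso_timestamp) tuple; truncating
    the timestamp to "YYYY-MM-DDTHH" buckets the samples by hour.
    Storing only these counts, instead of the raw rows, is what cuts
    the data reaching the web server by orders of magnitude.
    """
    buckets = defaultdict(set)
    for device, floor, ts in records:
        buckets[(floor, ts[:13])].add(device)  # dedupe within the hour
    return {key: len(devices) for key, devices in buckets.items()}

records = [
    ("d1", "LH-2", "2019-07-01T10:05:00"),
    ("d1", "LH-2", "2019-07-01T10:06:00"),  # same device, same hour
    ("d2", "LH-2", "2019-07-01T10:30:00"),
    ("d1", "LH-3", "2019-07-01T11:02:00"),
]
print(unique_devices_per_hour(records))
# {('LH-2', '2019-07-01T10'): 2, ('LH-3', '2019-07-01T11'): 1}
```

The "seams" risk noted above shows up here too: if one aggregation run ends mid-hour and the next begins there, a device active across the boundary can be counted in both partial buckets unless runs are aligned to whole hours.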
The conference adopted the theme of "Intelligent Management for the Next Wave of Cyber and Social Networks". Besides the regular tracks, the five-day conference featured some great tutorials, keynotes and panels. I have pages of notes and many contacts to follow up on.
A few highlights: zero-touch network and service management (and how it's actually "touch less" rather than touchless!); Huawei's Big Packet Protocol (network management via packet header programming); DARPA's off-planet network management (fractionated architectures for satellites); blockchain's social, political and regulatory challenges (does it not work with GDPR?) by UZH; data science/ML for network management from Google and Orange Labs (with some Python notebooks and a comprehensive survey paper of 500+ references); and many more. I am hoping to write more about some of them in the future when I have a chance to study them further. There are certainly some good topics for student projects.
Since I am linked to both the multimedia/HCI and communication network communities, I have the opportunity to observe the different approaches to AI and ML, and the different challenges, faced by these communities. In multimedia, it is relatively easy to acquire large and clean datasets, and there is a high level of tolerance when it comes to trial and error: 1) no one will get upset if a few out of a hundred image search results are inaccurate, and 2) you can piggyback a training module or reinforcement learning on your services to improve the model. Furthermore, applications are often part of a closed proprietary environment (end-to-end control), and users are not that bothered about giving up their data. In networking, things are not far from "mission impossible". 95% accuracy in packet forwarding will not get you very far, and there is not much infrastructure available to track any data, let alone make any data open for research. Even when there are tools to do so, you are likely to encounter encryption, or information that is too deep in the packet to extract in practice. Also, tracking network data seems to attract more controversy. We have a long and interesting way to go.
Washington, D.C. is surrounded by some amazing places to visit. George Washington's riverside Mount Vernon is surely worth a trip. Not far from Dulles airport is Great Falls Park, with spectacular waterfalls on the Potomac River, which separates Maryland and Virginia. Further west are the 100-mile scenic Skyline Drive and the Appalachian Trail in Shenandoah National Park.
I have been a regular visitor to ACM TVX since it first became an ACM-sponsored event in 2014 (it was previously known as EuroITV). This year, the conference will be held at MediaCityUK, Salford in early June. We'll bring two pieces of early-stage research to Salford: understanding user attention in VR using gaze-controlled games, by Murtada Dohan (a newly started PhD candidate), and a demo of abstract painting in VR by fine art artist Dr Alison Goodyear. You might have guessed that we have plans to bring these two together and experiment with new ways of content creation and audience engagement for both the arts and HCI communities.
Dohan, M. and Mu, M., Understanding User Attention in VR Using Gaze-Controlled Games. Abstract: Understanding users' intent has a pivotal role in developing immersive and personalised media applications. This paper introduces our recent research and user experiments towards interpreting user attention in virtual reality (VR). We designed a gaze-controlled Unity VR game for this study and implemented additional libraries to bridge raw eye-tracking data with game elements and mechanics. The experimental data show distinctive patterns of fixation spans, which are paired with user interviews to help us explore characteristics of user attention.
Goodyear, A. and Mu, M., Abstract Painting Practice: Expanding in a Virtual World. Abstract: This paper sets out to describe, through a demo for the TVX conference, how virtual reality (VR) painting software is beginning to open up as a new medium for visual artists working in the field of abstract painting. The demo achieves this by describing how an artist who usually makes abstract paintings with paint and canvas in a studio, that is, paintings existing as physical objects in the world, encounters and negotiates the process of making abstract paintings in VR using Tilt Brush software and head-mounted displays (HMDs). This paper also indicates potential future avenues for content creation in this emerging field and what this might mean not only for the artist and the viewer, but for art institutions trying to provide effective methods of delivery for innovative content in order to develop and grow new audiences.
I had the great pleasure of joining a Westminster Higher Education Forum event today as a speaker. My session was chaired by the Labour MP Mr Alex Sobel, and its main theme was the opportunities and challenges of adopting new technologies in colleges and universities in the UK. The venue was packed with 100+ delegates from over 60 institutions and businesses across England. I spoke about our research findings on the use of VR in education and shared my views on how technologies can empower human educators in Education 4.0. The following are my notes. The official transcripts from all speakers will be available on the Westminster website.
Virtual reality in its early days was mainly used for industrial simulation and military training, in controlled environments using specialised equipment. As the technology became more accessible, we started to see more use of VR in gaming and education. In education, VR is mostly used as a stimulus to enhance student engagement and the learning experience. It helps visual learners, breaks down barriers, and can visualise things that are hard to imagine. So we are mostly capitalising on the indirect benefits.
My research group is interested in whether such stimuli can truly improve learning outcomes, and how, so that we know how to improve the technology or use it more appropriately. We conducted an experiment with two groups of university students to compare how well they learn hard sciences using VR materials versus PowerPoint slides. Their performance was measured using a short exam paired with interviews. The results suggest that the majority of students prefer learning in VR, but there is no significant difference in average scores between the two groups. A recent study by Cornell University shows a similar finding. However, when we looked at the breakdown of scores on individual questions, we discovered that students who studied via VR did very well on questions related to visual information recognition, but struggled to recall numerical and textual details such as the year and location of an event. We think this is due to how information is presented and the extra cognitive load in VR. So VR made some things better but others worse; it's a double-edged sword.
This does not mean that VR is a waste of money. We need more work to learn how to use the tool better. This means two things. One, we need VR to be more accessible: not only in cost but, more importantly, through easy-to-use design tools and open libraries that help average lecturers embrace the technology. Two, we need appropriate metrics and measurement tools to assess the actual impact of new technologies, and we need to share that experience with the community.
Furthermore, we need to keep an eye on what roles VR should take in education. One thing we can learn from the past is PowerPoint in education. (PowerPoint was invented in the 1980s, acquired by Microsoft, and went on to become one of the most commonly used tools in business and education. It has drastically changed how teaching is done in classrooms.) PowerPoint was meant to augment a human presenter, but it has become the main delivery vehicle in classrooms, while lecturers act as operators or narrators of slides. Many have concluded that PowerPoint has not empowered academia. (Some institutions have banned teachers from using PowerPoint. According to the NYTimes, similar decisions were made in the US Armed Forces, which regard it as a poor tool for decision-making.) Many institutions, including the University of Northampton, are moving away from pure slideshows towards active and blended learning, using data science and the smart campus to support hands-on, experimental and interactive learning. So we can certainly learn from the past when we approach VR and other new technologies.
Another important aspect is the human factor. At the end of the day, only human educators are accountable for the teaching process. We listen to what learners say, observe their emotions, sympathise with their personal issues, and reason with them over every decision we make while trying to be as fair as possible. My team is working on many computer science research topics related to human factors, such as interpretable machine learning and understanding human intent. However, new technologies such as VR and AI should be designed and integrated to empower human educators rather than replace us.
Like many academics, I regularly engage with the reviewing process of renowned conferences and journals. I have not ventured into any substantial editorial role yet, but I try to help out as much as possible. All of my journal review activities are registered on Publons to keep a record (mainly for myself). It was to my surprise that I received a Publons Peer Review Award 2018 as one of the "Top 1% of reviewers in Computer Science". I know many people will see this as a gimmick, but hey, our research communities rely heavily on quality peer reviews from volunteers. Also, an award is an award! 🙂