What is productivity? A solid 8 hours of coding without interruption? Does it matter much if our project is on time or on budget, if we end up building something nobody wants? It would be a waste of human effort, investment, time and individual creativity, wouldn’t it? The Lean Startup by Eric Ries explores how we can avoid working efficiently on the wrong things by understanding what really matters to a product/project through iterations of a (quick) Build-Measure-Learn loop.
I came across the concept of lean manufacturing/development a few years ago while doing project management for some EU/UK research projects, but I was very sceptical about it. Releasing any “minimum viable product (MVP)” was a bad idea to me, not only because I wanted the user experience to be absolutely great from day one but also because I believed that building an MVP is a waste of resources. If our plan is to build a car, why do we want to spend a few days gluing two push bikes together for an early version? Surely we can never reuse any technology or know-how of building and gluing bikes together for a car, right? Well, the key is whether we factor “vision” into project management. If we are absolutely sure about the vision (e.g., we know exactly what car we’ll build), then it’s a matter of system-level efficiency (get the programmers to work as hard as they can and make sure they are “in the zone” all day, every day). However, we often don’t know what our users want. In fact, the users are often not sure what they want either. Therefore, learning what we should build should be an integral part of running a project, and the learning must be done using the right testing and measuring methods, such as sandboxed split tests and actionable metrics.
One of the main reasons that I picked up this book again and read it from cover to cover is that we are witnessing an increasing number of students claiming that they followed the “Waterfall” model in their dissertations (Seriously? You did a one-man waterfall?!), even though they have surely learned other models like RAD, the V-model, Agile, etc. So I am going to run a trial and introduce the lean framework in my second-year Interaction Design course. Many of the lean principles already resonate with elements of that course, so I hope it’s a good starting point (by “hope”, I mean build, test and learn. LOL).
Disclaimer: I am not sponsored by the author of the book nor any publisher/reseller to use it as part of my course.
I am fortunate to be involved in both the communication networks and multimedia communities. Following my visit to IEEE IM, I ventured to Hilversum, the Netherlands for ACM TVX, a flagship conference on interactive online TV and immersive experiences. I am a regular attendee of TVX and there are simply too many reasons for me not to miss this year’s iteration: 1. It’s at our doorstep: a 45-minute flight to Amsterdam (although my driver did pick me up 4 HOURS before the flight, because “You never know what will happen on the M1 southbound to London airports at that time in the morning”…). 2. My MSc student Hussein is presenting his short paper. 3. Felix and Jing from TU-Berlin did a great job getting our full paper accepted. 4. I look after the WiP track this year along with Elena, and I have been asked to chair the “Madness Session” in the conference programme.
We arrived at Hilversum, a small town ~20 miles east of the capital, at lunchtime. Hilversum is at the heart of the Dutch multimedia research and industry community and the centre of media-related innovation in the Netherlands. Its Media Park is home to the Dutch public broadcaster NPO, as well as commercial broadcasters and audio-visual production companies. The decor at the railway station gives away the themes of the Hilversum Media Park.
Situated in Media Park, the Netherlands Institute for Sound and Vision (NISV) is the host of ACM TVX 2017. NISV collects, preserves and provides access to Dutch audio-visual heritage for media professionals, education, science and the general public. Its collection contains more than a million hours of television, radio, music, film and other media, from its beginnings in 1898 until today.
In the photo below, you can see the workshop where movie reels are digitised (upper floors) and stored in a data centre (lower floors).
The conference was packed with exciting keynotes, presentations, posters and demos. Felix did extremely well with his presentation in the main track, considering it’s his first conference as an MSc student at TU-Berlin.
My Madness session was also a success. It might have been one of the most challenging sessions to chair, as we needed to fit 20 talks into a 30-minute slot. The aim is to provide a very quick overview of all poster and demo work, so people can be more selective when they attend the poster/demo sessions (it’s like going through 20 movie trailers and deciding which ones to watch). I have to say a big thank you to all presenters, who executed the 1-slide, 1-minute rule beautifully! Our MSc student Hussein did a good job introducing his work on an IoT middleware to enable immersive TV experiences.
I particularly liked the Social VR and multisensory demos from TNO and the University of Sussex SCHI Lab. The Social VR work superimposes live audio-visual feeds of other gamers in a VR game, fostering social interaction between gamers for a better gaming experience. I did give it a go and lost the game because my opponent kept talking and waving at me and distracted me from the game (that’s my excuse and I’ll stick to it…). The multisensory work shows how we can use a matrix of ultrasonic speakers as a contactless haptic tool to enhance the movie experience. Despite being at a very early stage, both demos showed promising starts to some great research with substantial impact. Different parts of BBC R&D also brought quite a few exciting pieces of work, including 360 VR subtitles (best WiP paper), CAKE (object-based media production), and Tellybox (9 demos of future TV).
My main takeaway from TVX is that games design, especially interactive narratives, is becoming a key element in VR innovation. VR designers often complain about people not turning their heads or moving their bodies enough to appreciate the immersive environment. But how often do we look around curiously in the real world? I am sitting in an open-plan office and I won’t voluntarily check what’s behind or above me every few seconds unless something attracts my attention. So we can’t expect people to behave like a searchlight when they have VR goggles on. There is a lot to learn from the games design field, and I am taking free BSc Games Design/Arts/Development courses from my colleagues.
There are few things that bring as much joy to an academic as receiving an approval email from EPSRC (on a Monday morning!). My First Grant proposal Software Defined Cognitive Networking: Intelligent Resource Provisioning For Future Networks (EP/P033202/1) has been assessed through the EPSRC peer review process and has been recommended for funding. I am very pleased to see all four reviewers unanimously giving the best score available (6 out of 6), scores which were highly valued by the EPSRC ICT Prioritisation Panel in April 2017 (the proposal ranked 3rd out of 11). The 2-year project is set to start in August 2017 and will be joined by a Research Associate (starting in early 2018) and at least one PhD student (funded by the host institution). I am pleased to have Hewlett-Packard Enterprise Aruba and Lancaster University as project partners, who have been very supportive from the very beginning.
EPSRC (Engineering and Physical Sciences Research Council) is the main UK government agency for funding research and training in engineering and the physical sciences – from mathematics to materials science, and from information technology to structural engineering. First Grant is a funding scheme set up by EPSRC to help “early career academics” establish their research leadership. In the ICT area, First Grant usually sees a higher success rate than the regular Standard Grants, and yet it is nothing less than a tough hunger game. Every eligible person has only one shot at a First Grant. You wouldn’t even think of writing the first letter of your proposal before establishing a strong research track record and evidence of research networks. A proposal (including several mandatory sections) normally takes six months to write, and often rewrite, while you fulfil your standard teaching and admin duties. In the proposal, the PI must demonstrate expertise (and potential) in their research area as well as managerial skills in project management, finance, and impact generation. Once submitted, the proposal goes through a rigorous reviewing process in which EPSRC invites comments from several field experts from academia and industry. The assessment criteria include Quality, Importance, Impact, Applicant, and Resources and Management. A panel, organised a few times a year, then considers all new proposals together with their reviews and determines which ones to fund. Needless to say, I am very proud to see my work being recognised and awarded by a prestigious funding body.
I will publish more posts on my First Grant journey, project partners, and all the people who supported me along the way. For now, back to exam paper marking!
I spent my first day at the QoE-Management workshop, which had one keynote followed by seven presentations. There is a lot of work on measuring different aspects (delay, switching, fairness, buffer underrun) of the quality of adaptive streaming. Machine learning is also gaining popularity in QoE management. In my opinion, the QoE community faces a few hurdles before a major leap ahead: human intent/perception, encrypted traffic, feasible machine learning solutions in communication networks, and end-to-end multi-service management. I am glad to see that this community is very open to the challenges ahead. It was also quite interesting to see Tobias opening up the argument on the Mean Opinion Score (MOS). MOS is essentially a method to gather and analyse user opinions in subjective experiments. It has been widely used in the QoE community for decades, but it is mathematically flawed. I discussed this five years ago in a paper at IEEE CCNC: Statistical Analysis of Ordinal User Opinion Scores (Warning! It will upset you if you’ve done a lot of work using conventional MOS… If you end up upset, seek a doctor’s advice. Preferably a doctor in Mathematics.). The Tactile Internet was mentioned a few times as one of the use cases. I think someone also mentioned NFV in user terminals with incentives? Why not…
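A toy example (my own illustration, not from the CCNC paper) shows the core of the problem: opinion scores are ordinal, so two very different sets of responses can produce an identical arithmetic mean.

```python
# Two sets of 5-point opinion scores with the same arithmetic mean
# ("MOS") but very different user experiences; the data are made up
# purely for illustration.
from statistics import mean, median
from collections import Counter

polarised = [1, 1, 5, 5, 5]  # users either hate it or love it
moderate = [3, 3, 3, 4, 4]   # everyone finds it acceptable

assert mean(polarised) == mean(moderate) == 3.4  # identical "MOS"

# Ordinal-aware summaries expose the difference the mean hides.
print("polarised:", median(polarised), Counter(polarised))
print("moderate: ", median(moderate), Counter(moderate))
```

Averaging treats the distance between scores 1 and 2 as equal to that between 4 and 5, which ordinal data does not guarantee; medians and full score distributions are safer summaries.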
The second day’s programme started with Raouf Boutaba (University of Waterloo)’s keynote on 5G network slicing. Raouf talked about virtual network embedding (VNE), with which we map virtual network nodes and links onto physical infrastructure. A good VNE would lead to better error tolerance, efficiency, “collective wellbeing”, etc. It is surely linked to the cognitive networking that I am working on. Later on, a few papers from industry dominated the experience track. Some highlights were Cisco’s model-driven network analysis using a variation of RFC 7950 YANG (YANG is a data modelling language used to model configuration data, state data, Remote Procedure Calls, and notifications for network management protocols); UNIFY, a framework that brings cross-layer “elasticity” and unifies cloud and service networks; virtualisation of radio access networks (for end-to-end management and other purposes); and IBM’s “BlueWall”, an orchestration of firewalls. BlueWall still keeps a human in the loop, so it’s probably more of an Intelligence Augmentation system than Artificial Intelligence. The panel on “Challenges and Issues for 5G E2E Slicing and its Orchestration” was packed with good talks on 5G. People were very optimistic about 5G open slicing, especially its potential for creating future-generation mobile operators (“anyone can be an operator”) and its E2E benefits for VR and emergency use cases.
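To make the VNE idea a little more concrete, here is a deliberately simplified greedy node-mapping sketch. The capacity-only model, names and numbers are my own assumptions; a real VNE algorithm would also embed virtual links onto physical paths and handle backtracking.

```python
# Toy greedy virtual network embedding (node mapping only): place each
# virtual node on the physical node with the most spare CPU capacity.
# Illustrative sketch, not the algorithm presented in the keynote.

def embed_nodes(virtual_demand, physical_capacity):
    """virtual_demand: {vnode: cpu}, physical_capacity: {pnode: cpu}."""
    spare = dict(physical_capacity)
    mapping = {}
    # Embed the most demanding virtual nodes first.
    for vnode, demand in sorted(virtual_demand.items(), key=lambda kv: -kv[1]):
        # Pick the physical node with the most spare capacity.
        pnode = max(spare, key=spare.get)
        if spare[pnode] < demand:
            return None  # embedding fails; a real VNE would backtrack
        mapping[vnode] = pnode
        spare[pnode] -= demand
    return mapping

print(embed_nodes({"v1": 4, "v2": 3}, {"p1": 5, "p2": 6}))
```

Even this toy version shows why a good VNE improves “collective wellbeing”: spreading demanding virtual nodes across physical nodes leaves headroom for future requests.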
The third day was led by two inspiring keynotes: “Intent-Driven Networks” from Laurent Ciavaglia, Nokia, and “The Future of Management is Cognitive” from Nikos Anerousis, IBM Research. They recognised that network/service management is moving towards “dark room + algorithms” (machine learning), but humans will still have pivotal roles: refereeing/curating knowledge and training systems to solve complex problems. I then went to the security and SDN sessions for the rest of the day. An Ericsson talk discussed the COMPA (Control, Orchestration, Management, Policy, and Analytics) adaptive control loop as an automation pattern for carrier networks, good work to follow if you do such high-level designs. There was an interesting paper on addressing the shortage of scarce and expensive TCAM memory on SDN switches using “memory swap”. The idea is to use the memory of the SDN controller to hold the least frequently used flow rules and free up TCAM space. Is it impractical or naive? I think there are scenarios where this solution will actually work well…
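As I understood it, the memory-swap mechanism works roughly like the following sketch (the class structure and names are my own illustration, not the authors’ code):

```python
# Sketch of the "memory swap" idea: when the switch's TCAM is full,
# evict the least-frequently-used flow rule to the controller's
# (software) memory, and re-install it on a later miss.

class FlowTable:
    def __init__(self, tcam_size):
        self.tcam_size = tcam_size
        self.tcam = {}      # rule -> hit count (fast hardware table)
        self.swapped = {}   # rules parked in controller memory

    def match(self, rule):
        if rule in self.tcam:
            self.tcam[rule] += 1
            return "tcam-hit"
        if rule in self.swapped:            # miss goes to the controller,
            count = self.swapped.pop(rule)  # which re-installs the rule
            self._install(rule, count + 1)
            return "swap-in"
        self._install(rule, 1)
        return "new-rule"

    def _install(self, rule, count):
        if len(self.tcam) >= self.tcam_size:
            victim = min(self.tcam, key=self.tcam.get)  # LFU victim
            self.swapped[victim] = self.tcam.pop(victim)
        self.tcam[rule] = count
```

The obvious cost is that a swap-in takes a round trip to the controller, which is why the idea works best when the traffic mix has a small hot set of rules and a long cold tail.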
David Gorman from IBM kick-started the fourth day with his excellent keynote talk on “Making Blockchain Real for Business”. David shared his vision of a world of shared ledgers, smart contracts, privacy (certificates) and trust. He used auditing as one of the use cases to demonstrate the uniqueness of blockchain in tracking transactions (changes) compared with conventional database solutions. His talk then converged on a brief introduction to Hyperledger, a community effort on cross-industry blockchain technologies. I had a short and interesting discussion with David on the impact and use cases of blockchain in higher education. Ultimately, blockchain is merely a technology and not a solution (in fact, the same applies to SDN). I think it can be a key technology to enable cross-service end-to-end management, but in many cases a solution is dictated not by the technology but by politics and regulations.
On the last day, I only stayed until lunchtime before I had to leave to catch my flight. The highlight of the day was certainly Alex Galis (UCL)’s talk on Programmability, Softwarization and Management in 5G networking. He emphasised the importance and impact of softwarization and network programmability, especially the quality of slices in future networks. I’d summarise his talk, blending in my own views, as autonomous, adaptive, and automated end-to-end resource management. Alex also spent a few slides summarising the key challenges in network slicing, which are very helpful to new researchers in this field.
All in all, IM 2017 in Portugal has been a wonderful event (in fact, they’ve done so well that they also won Eurovision 2017). I am looking forward to its future iterations (NOMS and IM).
Congratulations to my MSc student Hussein Ajam, who has just had a paper accepted by the ACM TVX Work-in-Progress (WiP) track. His work was inspired by a collaboration with Rajiv and Matt at BBC R&D on prototyping a solution to 1) assist TV producers in authoring immersive experiences for TV programmes and 2) orchestrate multiple (IoT) user devices at home to convey a sense of immersion through synchronised media playback. Hussein’s work was also briefly supervised by Marie-Jose Montpetit, a renowned Research Scientist at the MIT Media Lab, as part of ACM TVX’s Mentoring Programme. Since I am chairing the WiP track, Hussein’s submission was handled by the general chair to avoid any conflict of interest and ensure fairness, and I am very pleased to see the positive result, especially in a track with an acceptance rate of just above 50% (I will write a chair’s summary of the 10 accepted papers). For Hussein, there is still a lot of work to do on his ambitious work plan, and I am sure he will enjoy the conference in June.
Ajam, H., and Mu, M., A Middleware to Enable Immersive Multi-Device Online TV Experience, to appear in 2017 ACM International Conference on Interactive Experiences for Television and Online Video (TVX 2017) Work-in-Progress track, Hilversum, The Netherlands, 06/2017
Recent years have witnessed a boom in smart device technologies transforming the entertainment industry, especially traditional TV viewing experiences. In an effort to improve user engagement, many TV broadcasters are now investigating future-generation content production and presentation using emerging technologies. In this paper, we introduce ongoing work to enable immersive and interactive multi-device online TV experiences. Our project incorporates three essential developments on content authoring, device discovery, and cross-device media orchestration.
The non-cooperative and unsupervised resource competition between adaptive media applications (such as YouTube and Netflix) leads to significant detrimental quality fluctuations and an unbalanced share of network resources. Therefore, it is essential for content networks to better understand the application- and user-level requirements of different data flows and to manage the traffic intelligently. I am glad to have been part of a team of talented researchers that was one of the first to experiment with Software-Defined Networking (SDN)-assisted, QoE-aware network management using physical OpenFlow network switches. SDN is a network paradigm that decouples network control from the underlying packet forwarding. Combined with Fog Computing and Network Function Virtualization (NFV), this opens up compute locations close to the edge to enable intelligent network traffic management services (I also call this cognitive networking).
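To see why QoE-awareness matters, consider a toy sketch (with made-up, diminishing-returns utility curves, not our published model): splitting bandwidth equally between heterogeneous devices wastes capacity on a small screen that has already saturated its perceived quality.

```python
# A toy illustration of why throughput-fair sharing is not QoE-fair:
# a phone and a 4K TV sharing a link get equal bandwidth but very
# different perceived quality. The utility curve below is hypothetical.
import math

def qoe(bitrate_mbps, saturation_mbps):
    # Diminishing-returns utility, saturating per device class.
    return min(1.0, math.log1p(bitrate_mbps) / math.log1p(saturation_mbps))

link = 12.0  # Mbps to split between the two players

# Throughput-fair split: the phone is saturated, the TV is short-changed.
phone_fair, tv_fair = link / 2, link / 2
print(qoe(phone_fair, 3), qoe(tv_fair, 25))

# QoE-aware split: give the phone just enough, the TV the rest.
phone_qoe, tv_qoe = 3.0, link - 3.0
print(qoe(phone_qoe, 3), qoe(tv_qoe, 25))
```

The QoE-aware split leaves the phone at full perceived quality while raising the TV’s utility, which is the kind of user-level fairness our SDN work targets.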
Following our publications [1,2,3], there have been numerous requests from the research community to open-source our experimentation environment (named REF – Rapid Experimentation Framework). REF is an experimentation framework and a guide to building a testbed, which together provide a blueprint for an SDN-based contextual network design facility. In contrast to existing facilities, which typically provide very detailed low-level control of just the network infrastructure, our work provides higher-level abstractions of both the network and virtualisation infrastructures through orchestration, automating the creation, connection, running, and cleaning of nodes in an experiment. REF also provides an abstraction over the network to make the creation of context-aware traffic management applications as streamlined as possible. Additionally, with its unique configuration using slicing and port multiplexing, REF can create much larger physical networks with limited hardware than its competitors. Finally, the entire REF framework can be used and modified by anyone without any kind of registration or subscription to a federation.
Needless to say, “open-sourcing” a framework is not a straightforward task. Our source code is pretty much meaningless if it is not well connected with well-configured hardware equipment and a comprehensive guideline of do’s and don’ts. We wanted to publish this tutorial-style guideline in an elite outlet (so more people can benefit from it) while keeping the writing style suitable for SDN beginners, and there is no outlet more suitable for our work than the IEEE Communications Magazine. Furthermore, because we are using HPE’s network switches (3800 and later 3810 series) as reference equipment (and we know for sure that the vendors’ implementation of standards such as OpenFlow is a determining factor), we had to work with HPE to make sure our analysis and conclusions were accurate. Fortunately, Bruno Hareng, an SDN and Security Solution Manager at HPE, provided invaluable input to our work.
The manuscript describes the framework (shown in the figure above), covering its requirements, the purpose of each component within the system, and the abstractions it provides to the user. Next, the experiment testbed is detailed, providing a guide on how to construct your own virtualisation and network infrastructure for experimentation. After this, two use cases are described and used to show REF in operation: a Quality of Experience (QoE)-aware resource allocation model and a network-aware dynamic ACL. Finally, the article discusses interesting findings that arose during the creation and use of the system. The manuscript has now been accepted by IEEE Communications Magazine for publication in a July 2017 issue:
Fawcett, L., Mu, M., Hareng, B., and Race, N., “REF: Enabling Rapid Experimentation of Contextual Network Management using Software Defined Networking”, in IEEE Communications Magazine, 2017
Online video streaming is becoming a key consumer of future networks, generating high-throughput and highly dynamic traffic from large numbers of heterogeneous user devices. This places significant pressure on the underlying networks and can lead to a deterioration in performance, efficiency and fairness. To address this issue, future networks must incorporate contextual network designs that recognise application and user-level requirements. However, designs of new network traffic management components such as resource provisioning models are often tested within simulation environments which lack subtleties in how network equipment behaves in practice. This paper contributes the design and operational guidelines for a Software-Defined Networking (SDN) experimentation framework (REF), which enables rapid evaluation of contextual networking designs using real network infrastructures. Two use case studies of a Quality of Experience (QoE)-aware resource allocation model, and a network-aware dynamic ACL demonstrate the effectiveness of REF in facilitating the design and validation of SDN-assisted networking.
 Mu, M., Broadbent, M., Hart, N., Farshad, A., Race, N., Hutchison, D. and Ni, Q., “A Scalable User Fairness Model for Adaptive Video Streaming over SDN-Assisted Future Networks”, in IEEE Journal on Selected Areas in Communications. 34, 2168-2184, 2016. DOI: 10.1109/JSAC.2016.2577318
 Fawcett, L., Mu, M., Broadbent, M., Hart, N., and Race, N., SDQ: Enabling Rapid QoE Experimentation using Software Defined Networking, to appear in IFIP/IEEE International Symposium on Integrated Network Management (IEEE IM), Lisbon, Portugal, 05/2017
 Mu, M., Simpson. S., Farshad. A., Ni. Q., and Race. N., User-level Fairness Delivered: Network Resource Allocation for Adaptive Video Streaming (BEST PAPER AWARD) in Proceedings of 2015 IEEE/ACM International Symposium on Quality of Service (IWQoS), Portland, USA, 06/2015
A collaboration with the data mining group at TU-Berlin and folks at Lancaster and Glasgow has seen a full paper accepted by the ACM International Conference on Interactive Experiences for Television and Online Video (TVX 2017), Hilversum, The Netherlands, 06/2017. The acceptance rate was 31%, making it a competitive year for this conference series.
The paper describes our recent efforts in breaking the filter bubble, a term used to reflect the phenomenon in which a recommendation algorithm guesstimates a user’s preference from limited contextual information (such as user clickstream data) and only provides the user with a very small selection of content based on that preference. A side-effect of such an approach is that it often ends up isolating a user from a large amount of content that the system does not believe would interest him or her. As the user selects from within the bubble, the bubble may also become smaller and more “specialised”, causing a negative cycle. We believe that the recommender should be smarter than it is and “talk” to its users as a friend: a friend who knows what you like and yet very often surprises you with new and cool things. We studied this contextual bias effect in an online IPTV system (for which I was the project lead for some years), and developed a novel approach to re-balance accuracy and diversity in live TV content recommendation using social media.
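One crude way to see a filter bubble forming (my own illustration, not the metric used in the paper) is to track how much of the catalogue a user’s recommendation lists ever cover; a shrinking coverage over time signals the bubble closing in.

```python
# Catalogue coverage: the fraction of the catalogue ever shown to a
# user across their recommendation lists. All data below is made up.

def catalogue_coverage(recommendation_lists, catalogue):
    shown = set()
    for rec_list in recommendation_lists:
        shown.update(rec_list)
    return len(shown) / len(catalogue)

catalogue = {"news", "drama", "sport", "music", "film", "comedy"}
week1 = [["news", "drama", "sport"], ["music", "film", "news"]]
week4 = [["news", "drama", "news"], ["drama", "news", "drama"]]

# Coverage drops as the recommender overfits to the user's clicks.
print(catalogue_coverage(week1, catalogue))
print(catalogue_coverage(week4, catalogue))
```

Injecting an external signal, such as socially trending programmes, is one way to push items back into the lists that the click history alone would never surface.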
Yuan, J., Lorenz, F., Lommatzsch, A., Mu., M, Race, N., Hopfgartner, F., and Albayrak, S., Countering Contextual Bias in TV Watching Behavior: Introducing Social Trend as External Contextual Factor in TV Recommenders, to appear in 2017 ACM International Conference on Interactive Experiences for Television and Online Video (TVX 2017), Hilversum, The Netherlands, 06/2017
Context-awareness has become a critical factor in improving the predictions of user interest in modern online TV recommendation systems. In addition to individual user preferences, existing context-aware approaches such as tensor factorization incorporate system-level contextual bias to increase prediction accuracy. We analyzed a user interaction dataset from a WebTV platform, and identified that such contextual bias creates a skewed selection of recommended programs which ultimately leaves users in a filter bubble. To address this issue, we introduce a Twitter social stream as an external contextual factor to extend the choice with items related to social media events. We apply two trend indicators, Trend Momentum and SigniScore, to the Twitter histories of relevant programs. The evaluation reveals that Trend Momentum outperforms SigniScore and signals 96% of all peaks ahead of time regarding the selected candidate program titles.