Hooray! EPSRC First Grant

There are few things that bring as much joy to an academic as receiving an approval email from EPSRC (on a Monday morning!). My First Grant proposal Software Defined Cognitive Networking: Intelligent Resource Provisioning For Future Networks has been assessed through the EPSRC peer review process and has been recommended for funding. I am very pleased that all four reviewers unanimously gave the best score available (6 out of 6), an outcome endorsed by the EPSRC ICT Prioritisation Panel in April 2017. The two-year project is set to start in August 2017 and will be joined by a Research Associate (starting in early 2018) and at least one PhD student (funded by the host institution).

EPSRC (the Engineering and Physical Sciences Research Council) is the main UK government agency for funding research and training in engineering and the physical sciences – from mathematics to materials science, and from information technology to structural engineering. First Grant is a funding scheme set up by EPSRC to help “early career academics” establish their research leadership. In the ICT area, First Grant usually sees a higher success rate than the regular Standard Grants, yet it is nothing less than a tough hunger game. Every eligible person has only one shot at a First Grant. You wouldn’t even think of writing the first letter of your proposal before establishing a strong research track record and evidence of professional networks. A proposal (including several mandatory sections) normally takes six months to write, and often rewrite, while you fulfil your standard teaching and admin duties. In the proposal, the PI must demonstrate expertise (and potential) in their research area as well as managerial skills in project management, finance, and impact generation. Once submitted, the proposal goes through a rigorous reviewing process in which EPSRC invites comments from several field experts from academia and industry. The assessment criteria include Quality, Importance, Impact, Applicant, and Resources and Management. A panel, organised a few times a year, then considers all new proposals alongside their reviews and determines which ones to fund. Needless to say, I am very proud to see my work recognised and awarded by a prestigious funding body.

I will publish more posts on my First Grant journey, the project partners, and all the people who supported me along the way. For now, back to exam paper marking!

 

A great experience at IFIP/IEEE IM 2017: 5G slicing, cognitive, E2E, blockchain…

The week-long trip to the IFIP/IEEE International Symposium on Integrated Network Management (IM 2017) in Lisbon was fantastic. I had the chance to catch up with old friends and colleagues (Edmundo, Marilia, Alberto, etc.) and to meet other enthusiasts in network management, SDN, QoE, 5G, blockchain and cognitive technologies.

I spent my first day at the QoE-Management workshop, which had one keynote followed by seven presentations. There is a lot of work on measuring different aspects (delay, switching, fairness, buffer underrun) of the quality of adaptive streaming. Machine learning is also gaining popularity in QoE management. In my opinion, the QoE community faces a few hurdles before a major leap ahead: human intent/perception, encrypted traffic, feasible machine learning solutions in communication networks, and end-to-end multi-service management. I am glad to see that this community is very open to the challenges ahead. It was also quite interesting to see Tobias opening up the argument on the Mean Opinion Score (MOS). MOS is essentially a method to gather and analyse user opinions in subjective experiments. It has been widely used in the QoE community for decades, but it is mathematically flawed. I discussed this five years ago in a paper at IEEE CCNC: Statistical Analysis of Ordinal User Opinion Scores (Warning! It will upset you if you’ve done a lot of work using conventional MOS… If you end up upset, seek a doctor’s advice. Preferably a doctor of Mathematics.). The Tactile Internet was mentioned a few times as one of the use cases. I think someone also mentioned NFV in user terminals with incentives? Why not…
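The flaw, in a nutshell, is that opinion scores are ordinal labels, while the arithmetic mean silently assumes the distances between categories (“Fair” to “Good”, “Good” to “Excellent”) are equal. Here is a minimal sketch (my illustration for this post, not the analysis from the CCNC paper) of how the mean can hide completely different opinion distributions:

```python
# Minimal sketch: why averaging ordinal opinion scores is risky.
# (Illustrative only; not the method from the CCNC paper.)
from statistics import mean, median

# Two hypothetical test conditions rated on the 5-point ACR scale
# (1=Bad, 2=Poor, 3=Fair, 4=Good, 5=Excellent).
condition_a = [2, 2, 5, 5, 5, 5]   # polarised opinions
condition_b = [4, 4, 4, 4, 4, 4]   # unanimous "Good"

print(mean(condition_a), mean(condition_b))      # both report MOS = 4.0
print(median(condition_a), median(condition_b))  # 5.0 vs 4.0

# An identical MOS of 4.0 hides two very different opinion distributions;
# order statistics (median, quantiles) and distribution tests respect the
# ordinal nature of the data.
```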

The second day’s programme started with a keynote by Raouf Boutaba (University of Waterloo) on 5G network slicing. Raouf talked about virtual network embedding (VNE), with which we map virtual network nodes and links onto a physical infrastructure. A good VNE would lead to better error tolerance, efficiency, “collective wellbeing”, and so on. It is surely linked to the cognitive networking that I am working on. Later on, a few papers from industry dominated the experience track. Some highlights were Cisco’s model-driven network analysis using a variation of RFC 7950 YANG (YANG is a data modelling language used to model configuration data, state data, Remote Procedure Calls, and notifications for network management protocols); UNIFY, a framework that brings cross-layer “elasticity” to unify cloud and service networks; virtualisation of radio access networks (for end-to-end management and other purposes); and IBM’s “BlueWall”, an orchestration of firewalls. BlueWall still keeps a human in the loop, so it is probably more of an Intelligence Augmentation system than Artificial Intelligence. The panel on “Challenges and Issues for 5G E2E Slicing and its Orchestration” was packed with good talks on 5G. People were very optimistic about 5G open slicing, especially its potential in creating future-generation mobile operators (“anyone can be an operator”) and the E2E benefits for VR and emergency use cases.
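For readers new to VNE, a greedy node-then-link mapping is the usual starting point. The sketch below is my own simplification (not Raouf’s formulation, and it ignores bandwidth bookkeeping across shared links): each virtual node goes to the physical node with the most spare CPU, then every virtual link is checked for a feasible physical path.

```python
# Greedy virtual network embedding sketch (an illustrative simplification).
# Node attribute "cpu" holds capacity/demand; edge attribute "bw" likewise.
import networkx as nx

def greedy_vne(physical: nx.Graph, virtual: nx.Graph):
    spare = {n: physical.nodes[n]["cpu"] for n in physical.nodes}
    mapping = {}
    # place the most demanding virtual nodes first
    for v in sorted(virtual.nodes, key=lambda n: -virtual.nodes[n]["cpu"]):
        candidates = [n for n in spare
                      if spare[n] >= virtual.nodes[v]["cpu"]
                      and n not in mapping.values()]
        if not candidates:
            return None                        # embedding rejected
        host = max(candidates, key=spare.get)  # most spare CPU wins
        mapping[v] = host
        spare[host] -= virtual.nodes[v]["cpu"]
    # every virtual link needs a physical path with enough bandwidth
    for a, b, demand in virtual.edges.data("bw"):
        feasible = nx.subgraph_view(
            physical,
            filter_edge=lambda u, w, d=demand: physical.edges[u, w]["bw"] >= d)
        if not nx.has_path(feasible, mapping[a], mapping[b]):
            return None
    return mapping                             # virtual node -> physical node
```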

The third day was led by two inspiring keynotes: “Intent-Driven Networks” by Laurent Ciavaglia (Nokia) and “The Future of Management is Cognitive” by Nikos Anerousis (IBM Research). They recognised that network/service management is moving towards “dark room + algorithms” (machine learning), but humans will still have pivotal roles: referring/curating knowledge and training systems to solve complex problems. I then went to the security and SDN sessions for the rest of the day. An Ericsson talk discussed the COMPA (Control, Orchestration, Management, Policy, and Analytics) adaptive control loop as an automation pattern for carrier networks; good work to follow if you do such high-level designs. There was an interesting paper on addressing the scarcity of expensive TCAM memory on SDN switches using a “memory swap”. The idea is to move the least frequently used flow rules into the memory of the SDN controller to free up TCAM space. Is it impractical, naive? I think there are scenarios where this solution will actually work well…
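Sketching how such a swap might work (my own reading of the idea, not the paper’s implementation): the controller tracks per-rule usage, evicts the least frequently used rule from the switch when the TCAM fills up, and reinstalls an evicted rule when a table miss brings its traffic back to the controller.

```python
# Illustrative flow-rule "memory swap" sketch (not the paper's implementation).
from collections import Counter

class FlowRuleSwap:
    def __init__(self, tcam_capacity):
        self.tcam_capacity = tcam_capacity
        self.in_tcam = {}      # match -> actions, rules held on the switch
        self.swapped_out = {}  # match -> actions, rules parked at the controller
        self.hits = Counter()  # per-rule usage, e.g. from flow statistics

    def record_hit(self, match):
        self.hits[match] += 1

    def install(self, match, actions):
        if len(self.in_tcam) >= self.tcam_capacity:
            victim = min(self.in_tcam, key=lambda m: self.hits[m])  # LFU
            self.swapped_out[victim] = self.in_tcam.pop(victim)
            # a real controller would now send a flow-mod delete for `victim`
        self.in_tcam[match] = actions
        # ...followed by a flow-mod add for `match`

    def on_table_miss(self, match):
        # a packet-in for a swapped-out rule: promote it back into TCAM
        if match in self.swapped_out:
            self.install(match, self.swapped_out.pop(match))
```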

David Gorman from IBM kicked off the fourth day with his excellent keynote talk on “Making Blockchain Real for Business”. David shared his vision of a world of shared ledgers, smart contracts, privacy (certificates) and trust. He used auditing as one of the use cases to demonstrate the uniqueness of blockchain in tracking transactions (changes) in comparison to conventional database solutions. His talk then converged on a brief introduction to Hyperledger, a community effort on cross-industry blockchain technologies. I had a short and interesting discussion with David on the impact and use cases of blockchain in higher education. Ultimately, blockchain is merely a technology and not a solution (in fact, the same applies to SDN). I think it can be a key technology for enabling cross-service end-to-end management, but in many cases a solution is dictated not by the technology but by politics and regulations.

On the last day, I only stayed until lunchtime before I had to leave to catch my flight. The highlight of the day was certainly Alex Galis (UCL)’s talk on Programmability, Softwarization and Management in 5G Networking. He emphasised the importance and impact of softwarization and network programmability, especially the quality of slices in future networks. I’d summarise his talk, blending in my own views, as autonomous, adaptive, and automated end-to-end resource management. Alex also spent a few slides summarising the key challenges in network slicing, which is very helpful to new researchers in this field.

All in all, IM 2017 in Portugal was a wonderful event (in fact, Portugal did so well that they also won Eurovision 2017). I am looking forward to its future iterations (NOMS and IM).

A middleware that aims to help TV broadcasters create and deliver immersive experiences

Congratulations to my MSc student Hussein Ajam, who has just had a paper accepted by the ACM TVX Work-in-Progress (WiP) track. His work was inspired by a collaboration with Rajiv and Matt at BBC R&D on prototyping a solution to 1) assist TV producers in authoring immersive experiences for TV programmes and 2) orchestrate multiple (IoT) user devices at home to convey a sense of immersion through synchronised media playback. Hussein’s work was also briefly supervised by Marie-Jose Montpetit, a renowned Research Scientist at the MIT Media Lab, as part of ACM TVX’s Mentoring Programme. Since I am chairing the WiP track, Hussein’s submission was handled by the general chair to avoid any conflict of interest and to ensure fairness, and I am very pleased to see the positive result, especially in a track with an acceptance rate of just above 50% (I will write a chair’s summary of the 10 accepted papers). For Hussein, there is still a lot of work to do on his ambitious work plan, and I am sure he will enjoy the conference in June.


Ajam, H., and Mu, M., A Middleware to Enable Immersive Multi-Device Online TV Experience, to appear in 2017 ACM International Conference on Interactive Experiences for Television and Online Video (TVX 2017) Work-in-Progress track, Hilversum, The Netherlands, 06/2017

Abstract:

Recent years have witnessed a boom in smart device technologies transforming the entertainment industry, especially traditional TV viewing experiences. In an effort to improve user engagement, many TV broadcasters are now investigating future-generation content production and presentation using emerging technologies. In this paper, we introduce ongoing work to enable immersive and interactive multi-device online TV experiences. Our project incorporates three essential developments on content authoring, device discovery, and cross-device media orchestration.

Enabling Rapid Experimentation of Contextual Network Traffic Management using SDN

The non-cooperative and unsupervised resource competition between adaptive media applications (such as YouTube and Netflix) leads to significant detrimental quality fluctuations and an unbalanced share of network resources. It is therefore essential for content networks to better understand the application- and user-level requirements of different data flows and to manage the traffic intelligently. I am glad to have been part of a team of talented researchers which was one of the first to experiment with software-defined networking (SDN)-assisted, QoE-aware network management using physical OpenFlow network switches. SDN is a network paradigm that decouples network control from the underlying packet forwarding. Combined with fog computing and Network Function Virtualization (NFV), this opens up compute locations close to the edge for intelligent network traffic management services (I also call this cognitive networking).
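To make the decoupling concrete, here is a minimal controller application using the Ryu framework (a generic OpenFlow 1.3 illustration for this post, not our REF code): the control logic runs as ordinary software away from the switch and programs the forwarding plane, in this case by installing a table-miss rule that sends unmatched packets to the controller.

```python
# Minimal Ryu controller app (run with `ryu-manager thisfile.py`);
# a generic OpenFlow 1.3 illustration, not part of REF.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissToController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # the control plane (this app) programs the forwarding plane
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch()  # wildcard: match every packet
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```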

Following the publications [1,2,3] we made, there have been numerous requests from the research community to open-source our experimentation environment (named REF, the Rapid Experimentation Framework). REF is an experimentation framework and a guide to building a testbed, which together provide a blueprint for an SDN-based contextual network design facility. In contrast to existing facilities, which typically provide very detailed low-level control over just the network infrastructure, our work provides higher-level abstractions of both the network and virtualisation infrastructures through orchestration, automating the creation, connection, running, and clean-up of nodes in an experiment. REF also provides an abstraction over the network to make the creation of context-aware traffic management applications as streamlined as possible. Additionally, with a unique configuration using slicing and port multiplexing, REF can create much larger physical networks from limited hardware than its competitors can. Finally, the entire REF framework can be used and modified by anyone, without any kind of registration or subscription to a federation.
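To give a flavour of that level of abstraction, the sketch below is hypothetical (the `Experiment` class and its methods are my illustration, not REF’s actual API): the experimenter declares a topology, and the orchestration layer guarantees that nodes are created, connected, and cleaned up even if a run fails.

```python
# Hypothetical experiment-orchestration sketch; these API names are
# illustrative and are NOT REF's actual interface.
from contextlib import contextmanager

class Experiment:
    def __init__(self, name):
        self.name, self.nodes, self.links = name, [], []

    def add_node(self, image):
        self.nodes.append(image)                   # create a virtual node
        return len(self.nodes) - 1

    def connect(self, a, b, bandwidth_mbps=100):
        self.links.append((a, b, bandwidth_mbps))  # wire two nodes together

    def deploy(self):
        print(f"deploying {self.name}: {len(self.nodes)} nodes, "
              f"{len(self.links)} links")

    def destroy(self):
        print(f"cleaning up {self.name}")

@contextmanager
def experiment(name):
    exp = Experiment(name)
    try:
        yield exp            # caller describes the topology and runs it
    finally:
        exp.destroy()        # clean-up happens even if the run fails

# usage: describe what you need; orchestration handles the lifecycle
with experiment("qoe-fairness") as exp:
    client = exp.add_node("dash-client")
    server = exp.add_node("video-server")
    exp.connect(client, server, bandwidth_mbps=20)
    exp.deploy()
```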

Needless to say, to “open-source” a framework is not a straightforward task. Our source code is pretty much meaningless if it is not connected to well-configured hardware and a comprehensive guide of dos and don’ts. We wanted to publish this tutorial-style guide in an elite outlet (so more people can benefit from it) while keeping the writing style suitable for SDN beginners, and there is nothing more suitable for our work than IEEE Communications Magazine. Furthermore, because we use HPE network switches (the 3800 and later the 3810 series) as reference equipment (and we know for sure that the vendors’ implementation of standards such as OpenFlow is a deterministic factor), we had to work with HPE to make sure our analysis and conclusions were accurate. Fortunately, Bruno Hareng, an SDN and Security Solution Manager at HPE, provided invaluable input to our work.

[Figure: Framework for rapid SDN experimentation]

The manuscript sets out to describe the framework (shown in the figure above), covering the requirements of the framework and then the purpose of each component within the system, as well as the abstractions it provides to the user. Next, the experiment testbed is detailed, providing a guide on how to construct your own virtualisation and network infrastructure for experimentation. After this, two use cases are described and used to show REF in operation: a Quality of Experience (QoE)-aware resource allocation model and a network-aware dynamic ACL. Finally, the article discusses interesting findings that arose during the creation and use of the system. The manuscript has now been accepted by IEEE Communications Magazine for publication in a July 2017 issue:

Fawcett, L., Mu, M., Hareng, B., and Race, N., “REF: Enabling Rapid Experimentation of Contextual Network Management using Software Defined Networking”, in IEEE Communications Magazine, 2017

Abstract:

Online video streaming is becoming a key consumer of future networks, generating high-throughput and highly dynamic traffic from large numbers of heterogeneous user devices. This places significant pressure on the underlying networks and can lead to a deterioration in performance, efficiency and fairness. To address this issue, future networks must incorporate contextual network designs that recognise application and user-level requirements. However, designs of new network traffic management components such as resource provisioning models are often tested within simulation environments which lack subtleties in how network equipment behaves in practice. This paper contributes the design and operational guidelines for a Software-Defined Networking (SDN) experimentation framework (REF), which enables rapid evaluation of contextual networking designs using real network infrastructures. Two use case studies of a Quality of Experience (QoE)-aware resource allocation model, and a network-aware dynamic ACL demonstrate the effectiveness of REF in facilitating the design and validation of SDN-assisted networking.


References:

[1] Mu, M., Broadbent, M., Hart, N., Farshad, A., Race, N., Hutchison, D. and Ni, Q., “A Scalable User Fairness Model for Adaptive Video Streaming over SDN-Assisted Future Networks”, in IEEE Journal on Selected Areas in Communications. 34, 2168-2184, 2016. DOI: 10.1109/JSAC.2016.2577318

[2] Fawcett, L., Mu, M., Broadbent, M., Hart, N., and Race, N., SDQ: Enabling Rapid QoE Experimentation using Software Defined Networking, to appear in IFIP/IEEE International Symposium on Integrated Network Management (IEEE IM), Lisbon, Portugal, 05/2017

[3] Mu, M., Simpson, S., Farshad, A., Ni, Q., and Race, N., User-level Fairness Delivered: Network Resource Allocation for Adaptive Video Streaming (BEST PAPER AWARD), in Proceedings of 2015 IEEE/ACM International Symposium on Quality of Service (IWQoS), Portland, USA, 06/2015

Breaking the filter bubble

A collaboration with the data mining group at TU Berlin and folks at Lancaster and Glasgow has seen a full paper accepted by the ACM International Conference on Interactive Experiences for Television and Online Video (TVX 2017), Hilversum, The Netherlands, 06/2017. The acceptance rate was 31%: a competitive year for this conference series.

The paper describes our recent efforts in breaking the filter bubble, a term used to describe the phenomenon whereby a recommendation algorithm guesstimates a user’s preference from limited contextual information (such as user clickstream data) and only provides the user with a very small selection of content based on that preference. A side-effect of such an approach is that it often ends up isolating a user from (a large amount of) content that the system does not believe would interest him or her. As a user selects from within the bubble, the bubble may also become smaller and more “specialised”, creating a negative cycle. We believe that the recommender should be smarter than it is and “talk” to its users as a friend: a friend who knows what you like and yet very often surprises you with new and cool things. We studied this contextual bias effect in an online IPTV system (for which I was the project lead for some years), and developed a novel approach to re-balance accuracy and diversity in live TV content recommendation using social media.
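As a generic illustration of re-balancing accuracy and diversity (not the method in our paper, which introduces social trends as an external contextual factor), a recommender can re-rank its candidates so that each pick trades predicted relevance against similarity to items already chosen:

```python
# Generic accuracy/diversity re-ranking sketch (NOT the paper's method);
# a greedy, maximal-marginal-relevance style selection.
def rerank(candidates, relevance, similarity, k=10, diversity_weight=0.3):
    """candidates: item ids; relevance: item -> predicted score;
    similarity: (item, item) -> value in [0, 1]."""
    chosen, pool = [], set(candidates)
    while pool and len(chosen) < k:
        def marginal(item):
            # penalise items too similar to what is already recommended
            penalty = max((similarity(item, c) for c in chosen), default=0.0)
            return ((1 - diversity_weight) * relevance[item]
                    - diversity_weight * penalty)
        best = max(pool, key=marginal)
        chosen.append(best)
        pool.remove(best)
    return chosen
```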

Yuan, J., Lorenz, F., Lommatzsch, A., Mu, M., Race, N., Hopfgartner, F., and Albayrak, S., Countering Contextual Bias in TV Watching Behavior: Introducing Social Trend as External Contextual Factor in TV Recommenders, to appear in 2017 ACM International Conference on Interactive Experiences for Television and Online Video (TVX 2017), Hilversum, The Netherlands, 06/2017

Abstract:
Context-awareness has become a critical factor in improving the predictions of user interest in modern online TV recommendation systems. In addition to individual user preferences, existing context-aware approaches such as tensor factorization incorporate system-level contextual bias to increase prediction accuracy. We analyzed a user interaction dataset from a WebTV platform, and identified that such contextual bias creates a skewed selection of recommended programs which ultimately leaves users in a filter bubble. To address this issue, we introduce a Twitter social stream as an external contextual factor to extend the choice with items related to social media events. We apply two trend indicators, Trend Momentum and SigniScore, to the Twitter histories of relevant programs. The evaluation reveals that Trend Momentum outperforms SigniScore and signalizes 96% of all peaks ahead of time regarding the selected candidate program titles.
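As a rough sketch of what a momentum-style trend indicator over tweet counts might look like (my own reading of the general idea; the paper’s exact Trend Momentum definition may differ), short-term attention pulling ahead of a long-term baseline signals an emerging peak:

```python
# Momentum-style trend indicator over tweet counts per time slot.
# (Illustrative; the paper's Trend Momentum definition may differ.)
def moving_average(series, window):
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def trend_momentum(tweet_counts, short=3, long=12):
    """Positive values suggest an emerging peak: short-term attention
    is pulling ahead of the long-term baseline."""
    short_ma = moving_average(tweet_counts, short)
    long_ma = moving_average(tweet_counts, long)
    return [s - l for s, l in zip(short_ma, long_ma)]

# e.g. tweets per hour mentioning a programme title
counts = [2, 3, 2, 4, 3, 2, 5, 9, 18, 30, 26, 22]
print(trend_momentum(counts)[-3:])  # rising positive values ahead of the peak
```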

Next Generation Internet: what’s next?

The EC’s NGI group recently published the final report of their open consultation on the next generation Internet. The report identifies seven Technology Areas (TAs), which are believed to have pivotal roles in the future Internet. We shouldn’t be surprised to see the TAs bound by current FP7/H2020 or RCUK programmes, as the respondents to the report wish to continue evolving their work within those programmes. The vast majority of the respondents come from research institutions, civil society, and SMEs, while only 47 out of 449 are linked to industry. This is not to say that the conclusions of the report are far from realistic: many initiatives, old (the Internet) or new (OpenFlow), stemmed from projects at research institutions.

I can see many connections between the seven TAs and my research in software-defined cognitive networking and immersive media. Having said that, is there any researcher in computing and communications whose work doesn’t cover several of these TAs? Is there any ICT research today that doesn’t consider data, networks, and people as a whole? It seems that nearly the entire community envisages the NGI as a super-intelligent, self-programming, and human-caring thing or things. There are, of course, brave ones who think differently. I vividly recall an ex-colleague of mine once saying that he attributes his success in networking research to “focusing on moving every single [network] packet as fast as possible and nothing else”. Not many would think like that today…

TA 1 Discovery and identification tools

One of the premises of the Internet of Things is that devices around us will be partly physical and partly digital, with a vast majority of those devices being “headless”, lacking buttons, screens and other means by which the user interacts with the device. This premise forces us to figure out ways to discover, identify, and interact with the objects, devices and services in our lives in a seamless way, as well as ways to be made aware of the connected devices that surround us at any given moment.

TA 2 New forms of interactions and immersive environments

Increased computing and transmission power and the next generation of devices (enabled by micro-nano-bio technology) allow us to conceptualise new forms of interaction with machines and immersive environments that will have an impact on our professional and private lives. New challenges are arising related to augmented and virtual reality, behaviour, human-computer interaction, haptics, human-human interaction through computers, machine-to-machine communication, spatial recognition and geographic information systems.

TA 3 Personal data spaces

Personal data is everything that identifies an individual, from a person’s name to telephone number, IP address, date of birth and photographs. The next generation Internet aims to develop technologies to help us achieve greater control of our personal data, knowing what is being shared and with whom.

TA 4 Distributed architectures and decentralized data governance

Distributed open hardware and software ecosystems are capable of supporting decentralised data management (so that each piece of user-generated information remains under the full control of the entity who generated it, and is subject to on-demand aggregation by third parties), leveraging decentralised algorithms based on blockchains, distributed ledger technology (DLT) or peer-to-peer (P2P) technologies.

TA 5 Software-defined technologies

There is an evolution towards software-defined technologies. These may provide more functionality and control over the allocation of resources, configuration and deployment, and may open new opportunities to develop the Internet.

TA 6 Networking solutions beyond IP

The current Internet has certain limitations derived from protocols that were developed in the 1970s, such as the Transmission Control Protocol/Internet Protocol (TCP/IP) with its limitations on mobility, IP address management and task limitation. Quality of Service (QoS) is another problem derived from TCP/IP, generated by the inherent nature of networking technologies and their focus on pumping data from point A to point B as fast as possible without considering how the data is sent. The Internet of the future should be able to overcome these limitations.

TA 7 Artificial Intelligence

Artificial intelligence will also change the Internet. Inspired by how the human brain works, mathematical models can learn discrete tasks by analysing enormous amounts of data. So far, machines have learnt to recognise faces in photos, understand spoken commands, and translate text from one language to another. But this is only the beginning. Artificial intelligence will greatly sharpen the behaviour of online services and be a core technical enabler of the future Internet.

Overton, David, Next Generation Internet Initiative – Consultation, https://ec.europa.eu/futurium/en/content/final-report-next-generation-internet-consultation-0, 2017

VR. ready, steady…

Transforming online learning experience using virtual reality and gamification


Nothing cheers you up more than a new gadget in the middle of a term: an Oculus Rift, controllers and earphones (thank you, Nick!). The 60+ mph wind gusts (Storm Doris) nearly took them out of my arms in the car park, but I managed.

We are still expecting a Google Pixel+Daydream and a(n?) FOVE (with eye-tracking capabilities) to arrive.
