A paper is in submission for the ACM SIGCOMM Workshop on Internet-QoE

ACM SIGCOMM Workshop on QoE-based Analysis and Management of Data Communication Networks (Internet-QoE 2016)

CFP: http://conferences.sigcomm.org/sigcomm/2016/files/workshops/cfp-qoe.pdf

Abstract (draft):
The emerging network paradigm of Software Defined Networking (SDN) has been increasingly adopted to improve the Quality of Experience (QoE) across multiple HTTP adaptive streaming (HAS) instances. However, there is currently a gap between research output and reality in this field. QoE models, which offer user-level context to network management, are often tested in simulation environments. Such environments do not consider the effects that network protocols, client programs, and other real-world factors may have on the outcomes. On the other hand, setting up an experiment that reflects reality is a time-consuming process requiring expert knowledge.
This paper shares the design of, and guidelines for, an SDN experimentation framework (SDQ), which offers rapid evaluation of QoE models on real network infrastructures.

A paper is in submission for IEEE Journal of Selected Topics in Signal Processing (J-STSP)

A paper is in submission for the IEEE Journal of Selected Topics in Signal Processing (J-STSP) Special Issue on Measuring Quality of Experience for Advanced Media Technologies and Services. The work covers cross-device media orchestration using web technologies and human-factor modelling. It started in an EU project (with TNO), and I am interested in steering it towards multi-sensory and multimedia IoT. Collaborations are welcome!

Link to CFP: http://www.signalprocessingsociety.org/uploads/special_issues_deadlines/JSTSP_SI_measuring_quality.pdf

ACM TVX 2017 organising committee

I am on the organising committee of the ACM International Conference on Interactive Experiences for TV and Online Video (ACM TVX 2017). I’ll be taking the role of Work-in-Progress chair, championing brave new ideas and findings in their early stages. The general chairs of ACM TVX 2017 are Omar Niamut (TNO, NL), Judith Redi (Delft University of Technology, NL), and Dick Bulterman (CWI, NL). The conference information for TVX 2017 will be published soon. To see the current iteration of TVX, visit https://www.id.iit.edu/tvx2016/ .

ACM TVX is the leading international conference for the presentation and discussion of research into online video and TV interaction and user experience. The conference brings together international researchers and practitioners from a wide range of disciplines, from human-computer interaction, multimedia engineering and design to media studies, media psychology and sociology.

Article summarises collective work in the EC STEER project

  • STEER: Exploring the dynamic relationship between social information and networked media through experimentation
    (by Sylvie Dijkstra, Omar Niamut, Nikolaos Efthymiopoulos, Spyros Denazis, Nicholas Race, Mu Mu and Jacco Taal)

    Abstract:
With the growing popularity of social networks, online video services and smartphones, traditional content consumers are becoming the editors and broadcasters of their own stories. Within the EU FP7 project STEER, project partners have developed a novel system of new algorithms and toolsets that extract and analyse social informatics generated by social networks. Combined with advanced networking technologies, the platform offers more personalized and accurate content discovery and retrieval services. The STEER system has been deployed in multiple geographical locations during live social events such as the 2014 Winter Olympics. Our use case experiments demonstrate the feasibility and efficiency of the underlying technologies.

http://stcsn.ieee.net/e-letter/stcsn-e-letter-vol-3-no-2

Paper to appear in IEEE J-SAC

A paper with the title “A Scalable User Fairness Model for Adaptive Video Streaming over Future Internet” is in revision (revise-and-resubmit recommended) for the IEEE Journal on Selected Areas in Communications (J-SAC) Special Issue on “Video Distribution in the Future Internet” (publication date: second quarter of 2016).

Abstract:

The growing demand for online distribution of high-quality and high-throughput content, including both production and user-generated media, is dominating today’s Internet infrastructure. Among the myriad of media distribution mechanisms, HTTP adaptive streaming (HAS) is becoming a popular choice for multi-screen and multi-bitrate media services over heterogeneous networks. HAS applications often compete for network resources without any coordination, which leads to quality-of-experience (QoE) fluctuations in delivered content and unfairness between end users. Meanwhile, new network protocols, technologies and architectures, such as Software Defined Networking (SDN), are being developed for the future Internet. The programmability, flexibility and openness of these emerging developments can greatly assist the distribution of video over the Internet, driven by increasing consumer demands and QoE requirements. This paper introduces a novel user-level fairness model, UFair, and its hierarchical variant, UFairHA, which orchestrate HAS media streams using emerging network architectures and incorporate three fairness metrics (video quality, switching impact and cost efficiency) to achieve user-level fairness in video distribution. UFairHA has also been implemented in a purpose-built SDN testbed using open technologies such as OpenFlow. Experimental results demonstrate the performance and feasibility of our design for video distribution over the future Internet.

Keywords: Hierarchical resource allocation, adaptive media streaming, QoE utility fairness, network orchestration, software defined networking, human factor
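The abstract above describes folding three per-user metrics into a user-level fairness objective. As a loose sketch of the general idea (not the paper's actual model: the weights, score ranges and the use of Jain's index below are all my assumptions), one could combine each user's video quality, switching impact and cost efficiency into a single utility and then score fairness across competing streams:

```python
def composite_utility(quality, switching_impact, cost_efficiency,
                      weights=(0.5, 0.3, 0.2)):
    """Hypothetical weighted mix of the three metrics (all in [0, 1]).

    Switching impact is a penalty, so it enters with a negative sign.
    """
    wq, ws, wc = weights
    return wq * quality - ws * switching_impact + wc * cost_efficiency


def jains_index(utilities):
    """Jain's fairness index: 1.0 means perfectly equal utilities."""
    n = len(utilities)
    total = sum(utilities)
    return total * total / (n * sum(u * u for u in utilities))


# Three competing HAS sessions with illustrative metric values
users = [
    {"quality": 0.9, "switching_impact": 0.1, "cost_efficiency": 0.8},
    {"quality": 0.6, "switching_impact": 0.3, "cost_efficiency": 0.7},
    {"quality": 0.8, "switching_impact": 0.2, "cost_efficiency": 0.9},
]
utilities = [composite_utility(**u) for u in users]
print(jains_index(utilities))  # closer to 1.0 => fairer allocation
```

An orchestrator built on this sketch would shift bandwidth towards the user with the lowest composite utility for as long as the fairness index keeps improving.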

TPC member of MediaSync @ ACM MMSys 2016

Next year, ACM Multimedia Systems (MMSys 2016) will include a special session on media synchronization (MediaSync). MediaSync is a longstanding annual event with a special interest in the latest advances and remaining challenges in media synchronization to accommodate emerging forms of immersive, personalized and ultra-realistic media experiences in our multi-sensory, multi-protocol and multi-device world.

Special Session on “Media Synchronization”
May 10-13, Klagenfurt am Wörthersee (Austria)
Organizers
  • Pablo Cesar (Centrum Wiskunde & Informatica, CWI, Netherlands)
  • Fernando Boronat (Universitat Politècnica de València, UPV, Spain)
  • Mario Montagud (CWI & UPV)
  • Alexander Raake (Ilmenau University of Technology, Germany)
  • Zixia Huang (Google Fiber, USA)
Technical Program Committee (TPC)
  • I. Arntzen (Northern Research Institute, NORUT, Norway)
  • M. Barkowsky (IRCCyN, University of Nantes, France)
  • S. Chen (George Mason University, USA)
  • C. Griwodz (Simula Research Laboratory, Norway)
  • J. C. Guerri (Universitat Politècnica de València, UPV, Spain)
  • C.-H. Hsu (National Tsing Hua University, Taiwan)
  • Y. Ishibashi (Nagoya Institute of Technology, Japan)
  • J. Jansen (Centrum Wiskunde & Informatica, CWI, Netherlands)
  • M. Moreno (IBM, PUC-RIO, Brazil)
  • Mu Mu (University of Northampton, UK)
  • N. Murray (Athlone Institute of Technology, Ireland)
  • K. Nahrstedt (University of Illinois at Urbana–Champaign, USA)
  • M. Obrist (University of Sussex, UK)
  • B. Rainer (Alpen-Adria-Universität Klagenfurt, Austria)
  • J. Skowronek (TU Ilmenau, Germany)
  • G. Schuller (TU Ilmenau, Germany)
  • R. Steinmetz (Technische Universität Darmstadt, Germany)
  • M. Vaalgamaa (Skype/Microsoft, Helsinki, Finland)
  • D. Van Deursen (Onsophic Inc., Hasselt, Belgium)
  • M. Wältermann (AVM, Germany)
  • V. Wendel (Technische Universität Darmstadt, Germany)
  • W. Wu (Ricoh Innovations, California, USA)
  • R. Zimmermann (National University of Singapore, Singapore)
Important Dates:
  • Submission deadline: February 5, 2016
  • Acceptance notification: March 23, 2016
  • Camera ready deadline: April 8, 2016
Submission Instructions:
Please visit the MMSys website and check the Call for Papers.
Scope & Goals:
Media synchronization has been a key research area since the early development of (distributed) multimedia systems. Over the years, solutions to achieve intra- and inter-media synchronization in a variety of (mostly audiovisual) applications and scenarios have been proposed. However, it is by no means a solved research problem, as the latest advances in multimedia systems bring new challenges. The coexistence and integration of novel data types (e.g., multi-sensorial media or mulsemedia), advanced encoding techniques and multiple delivery technologies, together with the rise of heterogeneous and ubiquitous connected devices, are resulting in a complex media ecosystem for which evolved, or even radically new, synchronization solutions need to be devised.
This Special Session addresses exactly that: the latest advances and remaining challenges in media synchronization to accommodate emerging forms of immersive, personalized and ultra-realistic media experiences in our multi-sensory, multi-protocol and multi-device world. The purpose is to provide a forum for researchers to share and discuss recent contributions in this field and to pave the way for the future by focusing on different aspects of multimedia systems, such as content types, (multi-)processing techniques, networking issues, adaptive delivery and presentation, and human perception (QoE). This special session is the continuation of the MediaSync Workshop series (2012, 2013 and 2015) and of special sessions in other venues (QoMEX 2014).
Topics of Interest 
  • Novel architectures, protocols, algorithms and techniques.
  • Mulsemedia (multi-sensory media).
  • Theoretical frameworks & reference models.
  • Evaluation methodologies and metrics.
  • Standardization efforts.
  • Proprietary solutions (e.g., watermarking, fingerprinting…).
  • Technological frameworks & tools & testbeds.
  • Emerging media consumption patterns.
  • Content-aware & context-aware solutions.
Use Cases & Scenarios of Interest
  • (Multi-party) Conferencing.
  • Shared media experiences (e.g., Social TV).
  • Hybrid broadband broadcast services.
  • Multi-Screen applications.
  • Networked games.
  • Virtual Environments.
  • Telepresence, 3D Tele Immersion (3DTI).
  • Multi-sensory experiences (Olfactory, Haptics).
  • Distributed arts or music performances.
  • Synchronous e-learning.
  • Immersive audio environments.
  • Multi-level or multi-quality media.
  • Seamless session migration and convergence across devices.
  • Collaborative/Cooperative multi-processing and multi-rendering of media.

For more information, please visit: https://sites.google.com/site/mediasynchronization/mmsys2016

Paper accepted by IEEE ISM 2015

A paper entitled “Improving Interactive TV Experience Using Second Screen Mobile Applications” is to appear in the IEEE International Symposium on Multimedia (IEEE ISM), Miami, Florida, December 14-16.


Abstract

The past two decades have seen a shift in multimedia consumption behaviours from collectivism and passivity to individualism and activity. This paper introduces the architectural design, implementation and user evaluation of a second screen application, which is designed to supersede the traditional user control interface for primary screen interaction. We describe how NSMobile, our second screen application, can be used as a pervasive multimedia platform by integrating user experiences on both the second screen and the primary screen. The quantitative and qualitative evaluation of user interactions with interactive TV content also contributes to the future design of second screen applications.

Paper accepted by IEEE CCNC 2016

A full paper entitled “QoE-aware Inter-stream Synchronization in Open N-Screens Cloud” has been accepted by the QoE and Human-Centered Communications and Application track of the IEEE Consumer Communications & Networking Conference (CCNC), Las Vegas, January 9-12, 2016. The conference is held in conjunction with the International Consumer Electronics Show (CES).

I am also a PC member of the Cloud Services and Networking track of the same conference.


Paper abstract:

The growing popularity and increasing performance of mobile devices are transforming the way in which media can be consumed, from single-device playback to orchestrated multi-stream experiences across multiple devices. One of the biggest challenges in realizing such immersive media experiences is the dynamic management of synchronicity between associated media streams. This is further complicated by the many facets of user perception and the heterogeneity of user devices and networks. This paper introduces a QoE-aware open inter-stream media synchronization framework (IMSync). IMSync employs efficient monitoring and control mechanisms, as well as a bespoke QoE impact model derived from subjective user experiments. The impact model balances the accumulative impact of re-synchronization processes against the degree of non-synchronicity to safeguard QoE. Experimental results verify the run-time performance of the framework as a foundation for immersive media experiences in an open N-Screens cloud.
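The abstract above describes weighing the QoE cost of a re-synchronization action against the QoE cost of leaving streams out of sync. A minimal sketch of that trade-off follows; the threshold, weights and penalty model are invented for illustration, whereas the paper's actual impact model is derived from subjective experiments:

```python
class SyncController:
    """Toy inter-stream sync controller: trade skew against resync churn.

    All numeric parameters below are illustrative assumptions,
    not values from the IMSync model.
    """

    def __init__(self, skew_threshold_ms=80.0, resync_cost=0.2,
                 skew_weight=0.005):
        self.skew_threshold_ms = skew_threshold_ms
        self.resync_cost = resync_cost    # QoE penalty per resync action
        self.skew_weight = skew_weight    # QoE penalty per ms of skew
        self.accumulated_impact = 0.0

    def should_resync(self, skew_ms):
        """Resync only when the skew hurts QoE more than the resync would."""
        if abs(skew_ms) < self.skew_threshold_ms:
            return False  # skew is below the perceptibility threshold
        return self.skew_weight * abs(skew_ms) > self.resync_cost

    def step(self, skew_ms):
        """Process one monitoring sample; return the remaining skew."""
        if self.should_resync(skew_ms):
            self.accumulated_impact += self.resync_cost
            return 0.0  # streams realigned
        self.accumulated_impact += self.skew_weight * abs(skew_ms)
        return skew_ms


ctrl = SyncController()
print(ctrl.step(120))  # large skew: worth a resync, skew drops to 0.0
print(ctrl.step(40))   # small skew: tolerated, no resync triggered
```

The point of the design is that frequent small corrections can be more disruptive than the skew itself, so the controller only acts once the skew penalty outweighs the correction penalty.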

Network resilience with anomaly detection in the cloud

Since July 2015, I’ve been involved in Lancaster University’s activities in the EC FP7 SECCRIT (SEcure Cloud computing for CRitical infrastructure IT) project, a multidisciplinary research project with the mission to analyse and evaluate cloud computing technologies with respect to security risks in sensitive environments, and to develop methodologies, technologies, and best practices for creating a secure, trustworthy, and high assurance cloud computing environment for critical infrastructure IT.

[Figure: resilience network]

We specifically look into network resilience with anomaly detection in the cloud, using the well-known D2R2+DR (Defend, Detect, Remediate, Recover + Diagnose, Refine) principle. The first phase, D2R2, begins with defence: making the network as resistant as possible to challenges. Inevitably, however, a network will be threatened, and it must be able to detect this automatically. It will then remediate any damage to minimize the overall impact, and finally recover as it repairs itself and transitions back to normal operation. The second, longer-term phase, DR, consists of diagnosing any design flaws that permitted the defences to be penetrated, followed by a refinement of network behaviour to increase its future resilience. From this strategy, we derive a set of design principles leading to resilient networks.
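A compact way to read the two loops described above is as a cyclic state machine: the real-time D2R2 inner loop hands over to the background DR outer loop, which feeds lessons back into the defences. The phase names come from the paragraph; the Python encoding is just an illustration:

```python
from enum import Enum, auto


class Phase(Enum):
    DEFEND = auto()
    DETECT = auto()
    REMEDIATE = auto()
    RECOVER = auto()
    DIAGNOSE = auto()
    REFINE = auto()


# Real-time inner loop (D2R2) followed by the longer-term outer loop (DR)
TRANSITIONS = {
    Phase.DEFEND: Phase.DETECT,      # defences up; monitor for challenges
    Phase.DETECT: Phase.REMEDIATE,   # anomaly detected; limit the damage
    Phase.REMEDIATE: Phase.RECOVER,  # repair and return to normal operation
    Phase.RECOVER: Phase.DIAGNOSE,   # hand over to the background DR loop
    Phase.DIAGNOSE: Phase.REFINE,    # find the design flaw that was exploited
    Phase.REFINE: Phase.DEFEND,      # fold the lesson back into the defences
}


def resilience_cycle(start=Phase.DEFEND):
    """Yield the resilience phases indefinitely, in strategy order."""
    phase = start
    while True:
        yield phase
        phase = TRANSITIONS[phase]
```

In a real deployment the transitions would of course be event-driven (an anomaly detector firing, a repair completing) rather than a fixed cycle, but the ordering of phases is exactly the one the strategy prescribes.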