Leveling up (slowly) on game design

Inspired by some interesting discussions with the game team at work, I recently picked up Scott Rogers’ Level Up! The Guide to Great Video Game Design in an attempt to learn how to design an engaging game. There are a lot of fascinating conventions, rules, and ways of thinking in game design that we can borrow for pedagogy, psychology, and immersive media (virtual/augmented/mixed reality) research. The book has 18 levels (chapters) and 11 bonus levels, and I am leveling up very slowly…

To better engage with the theories in the book, I also had to conduct 120+ hrs of “research” in Diablo III: Rise of the Necromancer. The book also repeatedly refers to BioShock, so I’ll have to get back to that at some point.


Below are my progressively updated notes. They probably don’t make much sense to anyone who has not read the book.

Level 1: N00bs

  • What is a game: an activity that 1) requires at least one player, 2) has rules, and 3) has a win and/or lose condition.
  • A game needs a clear objective so the player knows what the goal is. As a designer, you should be able to sum up a game’s objectives quickly and clearly.
  • Games have two types of genres: story genre (the type of story, such as fantasy or sports) and game genre (the type of gameplay, such as action, puzzle, or shooter).
  • Games are made by people with different skills: programmer, artist, designer, producer, tester, writer, product manager, creative manager, marketing, etc.

Level 2: Ideas

  • Think about what gamers want: good games. Make players feel something they aren’t in the real world (powerful, smart, sneaky, bad, etc.).
  • What’s the age of my audience? Kids always want what is made for an audience older than their own age group. Don’t make the mistake of oversimplifying and talking down to younger audiences. Kids today are far smarter and way better gamers than we give them credit for.
  • Despite some academic definitions of fun (e.g., Marc LeBlanc’s sensation, fellowship, fantasy, discovery, narrative, expression, challenge, and submission), fun is completely subjective.
  • You have no guarantee that your game idea is going to be fun. Theory of Un-fun: remove all the un-fun, and all that should be left is the fun. Don’t be afraid to kill bad ideas. If un-fun is ruining your game, kill the un-fun.
  • Ideas are cheap; it’s how you use them that matters.

Level 3: Writing the story

  • The most basic structure of a story:
    • There is a hero with a desire (rescue a princess)
    • The hero encounters an event that interferes with obtaining the object of desire, causing a problem.
    • The hero tries to overcome the problem, but the attempt fails.
    • Reversal of fortune: the failure causes more trouble.
    • This escalates into an even greater problem and then one last problem (the boss).
    • The hero must resolve the final problem and gain the object of desire.
  • Every time someone plays a game, she creates a narrative. As a designer, you need to look at all the narratives possible and make them ALL fun.
  • Designers help players to create the narrative. As each experience builds on the next, the goal is to create rising emotional states for the player. (Left 4 Dead uses an AI “Director” that monitors players’ stress levels – calculated from variables including health, skill, and location – and then adjusts the items, enemies, and the music; a minimal sketch of this kind of loop follows at the end of this list.)
  • Players’ narratives can end up quite different from the game’s story.
  • The Triangle of Weirdness: choose ONE of characters, activities, and world to be weird. Choosing more than one risks alienating your audience.
  • Make the story serve the gameplay and not the other way around (example: BioShock has optional collectible audio tapes that reveal a deeper story without intruding on the main one). Keep your stories lively and moving: introduce a change in plot or action every 15 minutes.
  • Theme (e.g., love conquers all, eat or be eaten) can be more important than a story to a game.
  • Determine how long the game should be.
  • A Game by Any Other Name – still, choosing the right name is hugely important.
  • Create characters your players care about. Give players time to bond with the characters. Make death matter.
  • Game for kids: teach your players things without their even knowing it.
  • License.
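
As an aside on the Left 4 Dead note above: that kind of pacing loop can be sketched in a few lines. The snippet below is a minimal illustration of the idea only – the stress formula, weights, and thresholds are my own made-up assumptions, not Valve’s actual AI Director.

```python
# Minimal sketch of an "AI director"-style pacing loop.
# The stress formula, weights and thresholds are invented for illustration;
# they are NOT the actual Left 4 Dead implementation.

from dataclasses import dataclass

@dataclass
class PlayerState:
    health: float         # 0.0 (dead) .. 1.0 (full health)
    skill: float          # 0.0 (novice) .. 1.0 (expert), e.g. recent accuracy
    in_choke_point: bool  # location-based pressure, e.g. a narrow corridor

def stress_level(p: PlayerState) -> float:
    """Combine health, skill and location into a 0..1 stress estimate."""
    stress = 0.5 * (1.0 - p.health) + 0.3 * (1.0 - p.skill)
    if p.in_choke_point:
        stress += 0.2
    return min(stress, 1.0)

def direct_scene(p: PlayerState) -> dict:
    """Adjust items, enemies and music to keep the emotional curve rising."""
    s = stress_level(p)
    if s > 0.7:    # player is struggling: ease off and offer relief
        return {"enemy_spawns": "few", "items": "health packs", "music": "calm"}
    elif s > 0.4:  # comfortable tension: keep the pressure on
        return {"enemy_spawns": "steady", "items": "ammo", "music": "tense"}
    else:          # player is cruising: escalate towards a peak
        return {"enemy_spawns": "horde", "items": "none", "music": "crescendo"}

if __name__ == "__main__":
    print(direct_scene(PlayerState(health=0.25, skill=0.4, in_choke_point=True)))
    print(direct_scene(PlayerState(health=1.0, skill=0.9, in_choke_point=False)))
```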

Level 4: Paperwork

  • Making a game design document (GDD) – a means of communication to the player, your team members, and your publishing partners. The book provides templates in the bonus levels.
  • Step 1: the One-Sheet – keep it interesting, informative, and short. Unique selling point – get readers excited about the features of a game without going into lengthy detail about them.
  • Step 2: the Ten-Pager – people who finance your game will read this. Plenty of relevant visuals. Can be in PowerPoint. Tailor for audience type (the production team or marketing/executives). Includes the Title page, Game outline, Character, Gameplay, Game world, Game experience, Gameplay Mechanics, Enemies, Multiplayer/Bonus, and Monetization.
  • Step 3: Gameplay progression – there are several different ways to start a game (start with nothing, with several skills that need to be unlocked, with skills but no knowledge of how to use them, with power that is lost after an early fight, etc.).
  • Step 4: The Beat Chart – describes the game level by level, as a “map” of its structure.
  • Step 5: GDD – game designs are living things; the GDD provides the launching pad from which to soar. Documents connect the producer, the designer, the artist, and the programmer.
  • Step 6: Stay open-minded to ideas.

Level 5: Three Cs 1/3 – Character

  • The Three Cs – Character, Camera, and Control – are probably the most important elements of gameplay.
  • For character design, an important rule is “form follows function”. We want players to easily understand the personality of characters from their appearance (square -> strong/dumb, circle -> friendly, downward-pointing triangle -> powerful or sinister, depending on whether it is the body or the face).
  • Anything you can do to let the players customize their character furthers their feeling of ownership (including physique: eat too much junk food and get fat).
  • Realistic or stylized design?
  • Using all the parts to communicate information to the players (movement, appearance, inventory, weapons, etc.).
  • Use a second character (playable or companion) and make players care about them. Use “opposites attract”: characters should complement and contrast with each other. Let the relationship develop early in the game.
  • Differentiate characters so that each one has a weakness and a strength – Rock, Paper, Scissors (RPS) design (a tiny sketch follows at the end of this list).
  • Use non-player characters (NPCs) where they are needed for the player to succeed (tools, access, gear, backstory, compliments, etc.). Give NPCs something to do when idle, which helps enhance the environment. Make NPCs physically distinct in dress and body language. NPCs may even start a challenge with the player, mimicking a multiplayer experience.
  • Characters are determined by their metrics including height, speed, jump distance, attack distance, etc. Keep the metrics consistent.
  • Walking is not gameplay. Avoid having the player travel long stretches just sightseeing; add moves, events, etc. Westerners are accustomed to reading from left to right, so having your character walk to the left makes people feel “ill at ease”.
  • Alternating fast and slow gameplay is interesting.
  • Give your characters some short animation while they are not moving (idle). It’s the art of doing nothing.
  • Jumping is a complex movement. Think through the mechanics of jumping. This reflects a decision about physics: real-world vs game physics. A certain fidelity to real life is necessary to sell real-world physics, but break it where it makes the game more fun.
  • The book then elaborates on jumping and falling. The key messages are: 1) make the rules clear and consistent, 2) let players recover quickly (and continue with play).
  • Shadow design varies. A simple drop shadow lets players understand where they’ll land.
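
To make the Rock, Paper, Scissors note above concrete, here is a tiny sketch of a damage-multiplier table. The class names and multipliers are invented for illustration; the point is simply that every class beats exactly one other class and loses to another.

```python
# Sketch of Rock-Paper-Scissors (RPS) character balancing.
# Class names and multipliers are invented for illustration only.

ADVANTAGE = {
    ("knight", "archer"): 1.5,   # knights close the distance and crush archers
    ("archer", "mage"):   1.5,   # archers out-range slow-casting mages
    ("mage", "knight"):   1.5,   # mages burn through heavy armour
}

def damage_multiplier(attacker: str, defender: str) -> float:
    """Every class is strong against exactly one class and weak against another."""
    if (attacker, defender) in ADVANTAGE:
        return ADVANTAGE[(attacker, defender)]
    if (defender, attacker) in ADVANTAGE:
        return 1.0 / ADVANTAGE[(defender, attacker)]
    return 1.0  # mirror match-ups stay neutral

if __name__ == "__main__":
    for a in ("knight", "archer", "mage"):
        for d in ("knight", "archer", "mage"):
            print(f"{a:>6} vs {d:<6} x{damage_multiplier(a, d):.2f}")
```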

Level 6: Three Cs 2/3 – Camera

  • Nothing will cause players to stop playing your game faster than a poor camera.
  • Choosing the right camera view impacts the game design, control, and artwork.
    • Static camera – fixed position, focal distance or field of view
    • Scrolling camera – adds the advantages of movement (better engagement) and reveals hidden things in a dramatic way. Parallax scrolling (background layers move at different speeds from the foreground, creating a sense of depth; a tiny sketch follows at the end of this list), forced scroll (players are forced to keep up with the camera movement or die), Mode 7, multiplane camera (multiple layers of objects moving independently, used a lot by Disney; aka 2.5D).
  • First-person camera – watch out for DIMS (doom-induced motion sickness), which is influenced by the field of view. Remedies: a high framerate, a stationary reference object, avoiding whipping the camera around too much, etc. You never see the character, so there is less of a bond with them.
  • Third-person camera – a clear view of the character(’s backside). Challenges in camera design:
    • Camera movement: treat the camera as a person and give it room to manoeuvre.
    • Sorting: the camera moving through geometry and colliding with it. Remedies: 1) a detection radius to avoid collisions, or 2) turning the obstructing object translucent, etc.
    • Link to controls
    • Camera flipping: camera bounces between objects.
    • Obstruction: something gets between the camera and the player
    • Position: does the camera strictly follow the player, or lag behind and follow more loosely?
  • Who has control of the camera?
    • Player control – 1) Allowing players complete control over the following camera: players can get quickly disoriented, miss events, and suffer DIMS. 2) Free-look camera: simulates the character’s head, with a limited range. Put cameras on hydraulics (or elastics). In certain contexts, simulate a piece of equipment such as binoculars or transition to a first-person view.
    • No player control – less to worry about: designers can focus on visible polygons and textures, and there is no risk of players missing key elements; but you must consider and match the player’s movement, and let players get out of trouble easily if the camera is blocked.
    • Probably a hybrid of the previous two.
  • Isometric camera – a God view with no perspective adjustment, mapping 3D objects onto a 2D surface. In many cases, iso views are actually dimetric views, since only two of the axes share the same angle; this accommodates a 2:1 pixel ratio for less pixelation. Elevation can be an issue for iso views.
  • Top-Down camera – sometimes top-down/side view.
  • AR cameras – make sure things are scalable; try not to clutter up the HUD elements (to avoid DIMS).
  • Special case cameras – underwater or flying.
  • Tunnel vision – the player moves through tight environments like caves; use a rail camera to maintain the feeling of claustrophobia.
  • Hollywood cheatsheet:
    • Camera Shot guide – From extreme wide shot to Point-of-View shot and over-the-shoulder shot.
    • Camera Angle Guide – Eye level, Worm’s-eye, Dutch tilt, etc.
    • Camera Movement Guide – Arc, Dolly zoom, Pedestal, etc.
  • Always point the camera towards the objective. Even if players can’t see the objective, provide tools such as a “detective mode” to pinpoint objectives, show an arrow/tag, turn obstructions transparent, etc.
  • Multiplayer cameras.
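
As a footnote to the scrolling-camera bullet above, parallax comes down to one line of arithmetic: offset each background layer by the camera position scaled by a depth factor. The layer names and factors below are assumptions for the sake of the example.

```python
# Minimal parallax scrolling sketch: layers further from the camera
# scroll more slowly, creating an illusion of depth.
# Layer names and depth factors are invented for illustration.

LAYERS = {
    "far_mountains": 0.2,   # scrolls at 20% of camera speed
    "near_trees":    0.6,
    "playfield":     1.0,   # the gameplay layer moves 1:1 with the camera
}

def layer_offsets(camera_x: float) -> dict:
    """Horizontal draw offset for each layer given the camera position."""
    return {name: -camera_x * factor for name, factor in LAYERS.items()}

if __name__ == "__main__":
    for cam_x in (0.0, 100.0, 250.0):
        print(cam_x, layer_offsets(cam_x))
```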

Level 7: Three Cs 3/3 – Control

  • Controls are universally applicable to every style of game.
  • Always remember that humans are playing these games; avoid making the controls themselves too challenging (whilst the gameplay still can be).
  • Assign controls to different fingers based on what they are (and are not) good at.
  • When designing for young players, keep the controls (button presses, etc.) simple.
  • Do not go nuts with uber-complex controls.
  • There are a lot of different controls on touchscreens. Use as few as possible to minimise confusion.
  • A good designer will think about how the game is played in the real world as well as in the game world. Button mashing -> claw hands -> occupational overuse syndrome.
  • Never have a button do nothing when pressed (a small input-handling sketch follows at the end of this list):
    • Play a “negative response”
    • Make it clear that a button is inactive, then make a big deal when it is unlocked.
    • Assign a redundant but related function (to engage players’ memory of that button)
  • Mapping the moves to logical control locations helps immerse the player in the game world.
  • Make your control scheme resemble that of other successful games.
  • As soon as the button is pressed, the action should happen (more or less promptly). Use long animations carefully and with a purpose.
  • Most game controls are character-relative. Camera-relative ones don’t make much sense…
  • Actuators and gyroscopes in a controller: Silent Hill used two actuators at different frequencies to simulate a heartbeat. Creepy… Tilt controls are getting popular in mobile games. As the gyroscope is hidden within the mechanism of the controller, remind players that this function is available.
  • Camera-based motion controllers – broad and mimic reality.
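
Returning to the “never have a button do nothing” rule, here is the promised input-handling sketch. The action names and feedback strings are placeholders of my own; the point is that every press produces some response.

```python
# Sketch of the "never have a button do nothing" rule.
# Action names and feedback strings are placeholders for illustration.

UNLOCKED = {"jump", "attack"}          # abilities the player has earned so far
ALL_ACTIONS = {"jump", "attack", "double_jump", "magic"}

def on_button_press(action: str) -> str:
    if action in UNLOCKED:
        return f"perform {action}"
    if action in ALL_ACTIONS:
        # 1) play a "negative response" so the press is acknowledged, and
        # 2) remind the player that the ability exists but is still locked.
        return f"play 'locked' sound; show hint: '{action} unlocks later'"
    # Unmapped button: fall back to a redundant but related function
    # so the press still does *something* and builds muscle memory.
    return "open quick inventory"

if __name__ == "__main__":
    for a in ("jump", "double_jump", "taunt"):
        print(a, "->", on_button_press(a))
```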

Level 8: Sign language: HUD and Icon design

Level 9:

Level 10:

Level 11:

Level 12:

Level 13:

Level 14:

Level 15:

Level 16:

Level 17:

Level 18:


Waste Not – a Lean approach


What is productivity? A solid 8 hours of coding without interruption? Does it matter much whether our project is on time or on budget if we end up building something nobody wants? That would be a waste of human effort, investment, time, and individual creativity, wouldn’t it? The Lean Startup by Eric Ries explores how we can avoid working efficiently on the wrong things by understanding what really matters to a product/project through quick iterations of the Build-Measure-Learn loop.

I came across the concept of lean manufacturing/development a few years ago while doing project management for some EU/UK research projects, but I was very sceptical about it. Releasing any “minimum viable product (MVP)” seemed like a bad idea to me, not only because I wanted the user experience to be absolutely great from day one but also because I believed that building an MVP is a waste of resources. If our plan is to build a car, why would we want to spend a few days glueing two push bikes together for an early version? Surely we can never reuse any technology or know-how from building and glueing bikes together for a car, right? Well, the key is whether we factor “vision” into project management. If we are absolutely sure about the vision (e.g., we know exactly what car we’ll build), then it’s a matter of system-level efficiency (get the programmers to work as hard as they can and make sure they are “in the zone” all day, every day). However, we often don’t know what our users want. In fact, the users are often not sure what they want either. Therefore, learning what we should build must be an integral part of running a project, and the learning must be done using the right testing and measuring methods, such as sandbox split tests and actionable metrics (see the sketch below).
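
For anyone unfamiliar with those testing methods, a split (A/B) test with an actionable, cohort-based metric needs surprisingly little machinery. The sketch below uses made-up signup data purely for illustration; it is not from the book.

```python
# Minimal sketch of a split (A/B) test with a cohort-based, actionable metric:
# compare conversion per variant instead of a single vanity total.
# The data below is made up for illustration.

import random

def assign_variant(user_id: int) -> str:
    """Deterministically bucket users so each one always sees the same variant."""
    return "A" if user_id % 2 == 0 else "B"

def conversion_rate(events, variant: str) -> float:
    cohort = [converted for uid, converted in events if assign_variant(uid) == variant]
    return sum(cohort) / len(cohort) if cohort else 0.0

if __name__ == "__main__":
    random.seed(1)
    # (user_id, did_the_user_convert) - variant B converts slightly better here.
    events = [(uid, random.random() < (0.05 if uid % 2 == 0 else 0.08))
              for uid in range(10_000)]
    print("A:", round(conversion_rate(events, "A"), 3))
    print("B:", round(conversion_rate(events, "B"), 3))
```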

One of the main reasons that I picked up this book again and read it from cover to cover is that we see an increasing number of students claiming that they followed the “Waterfall” model in their dissertations (seriously? you did a one-man waterfall?!), even though they have surely learned about other models like RAD, V-model, Agile, etc. So I am going to trial the lean framework in my second-year Interaction Design course. Many of the lean principles already resonate with elements of that course, so I hope it’s a good starting point (by “hope”, I mean build, test, and learn. LOL).

Disclaimer: I am not sponsored by the author of the book nor any publisher/reseller to use it as part of my course.

Alternate realities at ACM TVX 2017


I am fortunate to be involved in both the communication networks and multimedia communities. Following my visit to IEEE IM, I ventured to Hilversum, the Netherlands, for ACM TVX, a flagship conference on interactive online TV and immersive experiences. I am a regular attendee of TVX and there were simply too many reasons not to miss this year’s iteration: 1) it’s at our doorstep, a 45-minute flight to Amsterdam (although my driver did pick me up 4 HOURS before the flight, because “You never know what will happen on the M1 southbound to London airports at that time in the morning”…); 2) my MSc student Hussein presented his short paper; 3) Felix and Jing from TU-Berlin did a great job getting our full paper accepted; and 4) I looked after the WiP track this year along with Elena, and I was asked to chair the “Madness Session” in the conference programme.

We arrived at Hilversum, a small town ~20 miles east of the capital, at lunchtime. Hilversum is at the heart of the Dutch multimedia research and industrial community and the centre of media-related innovation in the Netherlands. Its Media Park is home to the Dutch public broadcaster NPO, as well as commercial broadcasters and audio-visual production companies. The decor at the railway station gives away the themes of the Hilversum Media Park.

[Photo: decor at Hilversum railway station]

Situated in the Media Park, the Netherlands Institute for Sound and Vision (NISV) was the host of ACM TVX 2017. NISV collects, preserves, and provides access to Dutch audio-visual heritage for media professionals, education, science, and the general public. Its collection contains more than a million hours of television, radio, music, film, and other media from its beginnings in 1898 until today.


In the photo below, you can see the workshop where movie reels are digitised (upper floors) and stored in a data centre (lower floors).

[Photo: the NISV digitisation workshop and data centre]

The conference was packed with exciting keynotes, presentations, posters, and demos. Felix did extremely well with his presentation in the main track, considering it was his first conference as an MSc student at TU-Berlin.

[Photo: Felix’s presentation]

My Madness session was also a success. It might have been one of the most challenging sessions to chair, as we needed to fit 20 talks into a 30-minute slot. The aim was to provide a very quick overview of all the poster and demo work, so people could be more selective when attending the poster/demo sessions (it’s like going through 20 movie trailers and deciding which ones to watch). I have to say a big thank you to all the presenters, who executed the 1-slide, 1-minute rule beautifully! Our MSc student Hussein did a good job introducing his work on an IoT middleware to enable immersive TV experiences.

[Photo: Hussein delivers a lightning talk in the Madness session with other presenters standing by]

I particularly liked the social VR and multisensory demos from TNO and the University of Sussex SCHI Lab. The social VR work superimposes live audio-visual feeds of other gamers onto a VR game, fostering social interaction between gamers for a better gaming experience. I did give it a go and lost the game because my opponent kept talking and waving at me, distracting me from the game (that’s my excuse and I’m sticking to it…). The multisensory work shows how a matrix of ultrasonic speakers can be used as a contactless haptic tool to enhance the movie experience. Despite being at a very early stage, both demos showed a promising start to some great research with substantial impact. Different parts of BBC R&D also brought quite a few exciting pieces of work, including 360 VR subtitles (best WiP paper), CAKE (object-based media production), and Tellybox (9 demos of future TV).

[Photo: TVX demos – https://www.facebook.com/acmtvx/]

My main takeaway from TVX is that games design, especially interactive narratives, is becoming a key element of VR innovation. VR designers often complain about people not turning their heads or moving their bodies enough to appreciate the immersive environment. But how often do we look around curiously in the real world? I am sitting in an open-plan office and I won’t voluntarily check what’s behind or above me every few seconds unless something attracts my attention. So we can’t expect people to behave like a searchlight when they have a VR headset on. There is a lot to learn from the games design field, and I am taking free BSc Games Design/Arts/Development courses from my colleagues.

EPSRC First Grant Success

There are few things that bring as much joy to an academic as receiving an approval email from the EPSRC (on a Monday morning!). My First Grant proposal, Software Defined Cognitive Networking: Intelligent Resource Provisioning For Future Networks (EP/P033202/1), has been assessed through the EPSRC peer review process and recommended for funding. I am very pleased to see all four reviewers unanimously give the best score available (6 out of 6), which was highly valued by the EPSRC ICT Prioritisation Panel in April 2017 (ranked 3rd out of the 11 proposals). The 2-year project is set to start in August 2017 and will be joined by a Research Associate (starting in early 2018) and at least one PhD student (funded by the host institution). I am pleased to have Hewlett-Packard Enterprise Aruba and Lancaster University as project partners; both have been very supportive from the very beginning.

The EPSRC (Engineering and Physical Sciences Research Council) is the main UK government agency for funding research and training in engineering and the physical sciences – from mathematics to materials science, and from information technology to structural engineering. First Grant is a funding scheme set up by the EPSRC to help “early career academics” establish their research leadership. In the ICT area, the First Grant scheme usually sees a higher success rate than the regular Standard Grants, yet it is nothing less than a tough hunger game: every eligible person has only one shot at a First Grant. You wouldn’t even think of writing the first letter of your proposal before establishing a strong research track record and evidence of networks. A proposal (including several mandatory sections) normally takes six months to write, and often rewrite, while you fulfil your standard teaching and admin duties. In the proposal, the PI must demonstrate expertise (and potential) in their research area as well as managerial skills in project management, finance, and impact generation. Once submitted, the proposal goes through a rigorous reviewing process where the EPSRC invites comments from several field experts from academia and industry. The assessment criteria include Quality, Importance, Impact, Applicant, and Resources and Management. A panel, organised a few times a year, then collects all new proposals, accompanied by their reviews, and determines which ones to fund. Needless to say, I am very proud to see my work being recognised and awarded by a prestigious funding body.

I will publish more posts on my First Grant journey, project partners, and all the people who supported me along the way. For now, back to exam paper marking!

 

A great experience at IFIP/IEEE IM 2017: 5G slicing, cognitive, E2E, blockchain…

The week-long trip to the IFIP/IEEE International Symposium on Integrated Network Management (IM 2017) in Lisbon was fantastic. I had the chance to catch up with old friends and colleagues (Edmundo, Marilia, Alberto, etc.) and to meet other enthusiasts in network management, SDN, QoE, 5G, blockchain, and cognitive technologies.

I spent my first day at the QoE-Management workshop, which had one keynote followed by seven presentations. There is a lot of work on measuring different aspects (delay, switching, fairness, buffer underrun) of the quality of adaptive streaming. Machine learning is also gaining popularity in QoE management. In my opinion, the QoE community faces a few hurdles before a major leap ahead: human intent/perception, encrypted traffic, feasible machine learning solutions in communication networks, and end-to-end multi-service management. I am glad to see that this community is very open to the challenges ahead. It was also quite interesting to see Tobias open up the argument on the Mean Opinion Score (MOS). MOS is essentially a method to gather and analyse user opinions in subjective experiments. It has been widely used in the QoE community for decades but it is mathematically flawed. I discussed this five years ago in a paper at IEEE CCNC: Statistical Analysis of Ordinal User Opinion Scores (warning: it will upset you if you’ve done a lot of work using conventional MOS… If you end up upset, seek a doctor’s advice. Preferably a doctor in Mathematics.). The Tactile Internet was mentioned a few times as one of the use cases. I think someone also mentioned NFV in user terminals with incentives? Why not…
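
To make the MOS point concrete: averaging a 5-point opinion scale treats ordinal labels as if the gaps between them were equal, so two very different experiences can produce the same MOS. The toy example below uses invented scores and is not the analysis from the CCNC paper; it just illustrates the issue.

```python
# Toy illustration of why averaging ordinal opinion scores can mislead.
# The two score sets are invented; this is not the analysis from the CCNC paper.

from collections import Counter
from statistics import mean

# 5-point ACR scale: 1=Bad, 2=Poor, 3=Fair, 4=Good, 5=Excellent
condition_x = [3] * 8                    # everyone rates the service "Fair"
condition_y = [1, 1, 1, 1, 5, 5, 5, 5]   # polarised: half "Bad", half "Excellent"

for name, scores in (("X", condition_x), ("Y", condition_y)):
    print(name,
          "MOS =", mean(scores),                    # both come out as 3.0 ...
          "distribution =", dict(Counter(scores)))  # ... yet the experiences differ wildly
```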

The second day’s programme started with Raouf Boutaba (University of Waterloo)’s keynote on 5G network slicing. Raouf talked about virtual network embedding (VNE), with which we map virtual network nodes and links onto physical infrastructure. A good VNE would lead to better error tolerance, efficiency, “collective wellbeing”, etc. It is surely linked to the cognitive networking that I am working on. Later on, a few papers from industry dominated the experience track. Some highlights were Cisco’s model-driven network analysis using a variation of RFC 7950 YANG (YANG is a data modelling language used to model configuration data, state data, Remote Procedure Calls, and notifications for network management protocols); UNIFY, a framework that brings cross-layer “elasticity” to unify cloud and service networks; virtualisation of radio access networks (for end-to-end management and other purposes); and IBM’s “BlueWall”, an orchestration of firewalls. BlueWall still keeps a human in the loop, so it’s probably more of an Intelligence Augmentation system than Artificial Intelligence. The panel on “Challenges and Issues for 5G E2E Slicing and its Orchestration” was packed with good talks on 5G. People were very optimistic about 5G open slicing, especially its potential to create a future generation of mobile operators (“anyone can be an operator”) and the end-to-end benefits for VR and emergency use cases.
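
For readers new to the term, VNE is essentially an assignment problem: virtual nodes must land on physical nodes with enough spare capacity (and virtual links on physical paths). The sketch below is a deliberately naive illustration of the node-mapping half, with made-up capacities; real VNE algorithms are far more sophisticated.

```python
# Naive greedy sketch of the node-mapping half of Virtual Network Embedding (VNE).
# Capacities and requests are invented; real VNE algorithms also map virtual
# links onto physical paths and optimise globally.

def embed_nodes(physical_cpu: dict, virtual_demand: dict) -> dict:
    """Map each virtual node to the physical node with the most spare CPU."""
    spare = dict(physical_cpu)
    mapping = {}
    # Place the most demanding virtual nodes first.
    for vnode, demand in sorted(virtual_demand.items(), key=lambda kv: -kv[1]):
        host = max(spare, key=spare.get)
        if spare[host] < demand:
            raise RuntimeError(f"embedding failed: no capacity for {vnode}")
        mapping[vnode] = host
        spare[host] -= demand
    return mapping

if __name__ == "__main__":
    physical = {"p1": 10, "p2": 8, "p3": 6}      # spare CPU units per physical node
    slice_request = {"v1": 5, "v2": 4, "v3": 4}  # CPU demand of a network slice
    print(embed_nodes(physical, slice_request))
```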

The third day was led by two inspiring keynotes: “Intent-Driven Networks” from Laurent Ciavaglia, Nokia, and “The Future of Management is Cognitive” from Nikos Anerousis, IBM Research. They recognised that network/service management is moving towards “dark room + algorithms” (machine learning), but humans will still have pivotal roles: referring/curating knowledge and training systems to solve complex problems. I then went to the security and SDN sessions for the rest of the day. An Ericsson talk discussed the COMPA (Control, Orchestration, Management, Policy, and Analytics) adaptive control loop as an automation pattern for carrier networks, good work to follow if you do such high-level designs. There was an interesting paper on addressing the shortage of scarce and expensive TCAM memory on SDN switches using a “memory swap”. The idea is to move the least frequently used flow rules into the SDN controller’s memory to free up TCAM space. Is it impractical or naive? I think there are scenarios where this solution will actually work well…
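
The “memory swap” idea is essentially a cache-eviction policy: when the flow table fills up, push the least frequently used rules out to (cheaper, slower) controller memory and pull them back on demand. The sketch below is my own simplification with an invented table size and rule format, not the paper’s implementation.

```python
# Simplified sketch of the TCAM "memory swap" idea: least-frequently-used
# flow rules are evicted from the switch's TCAM into controller memory.
# Table size and rule format are invented; this is not the paper's implementation.

TCAM_CAPACITY = 3

tcam = {}               # rule -> hit counter (rules resident on the switch)
controller_store = {}   # rules swapped out to the controller

def install(rule: str, hits: int = 1) -> None:
    if len(tcam) >= TCAM_CAPACITY:
        victim = min(tcam, key=tcam.get)             # least frequently used
        controller_store[victim] = tcam.pop(victim)  # evict to controller memory
    tcam[rule] = hits

def lookup(rule: str) -> str:
    if rule in tcam:
        tcam[rule] += 1
        return "hit in TCAM"
    if rule in controller_store:
        install(rule, hits=controller_store.pop(rule))   # swap the rule back in
        return "swapped back from controller"
    return "miss (ask controller for a new rule)"

if __name__ == "__main__":
    for r in ("flow_a", "flow_b", "flow_c"):
        install(r)
    tcam["flow_a"] += 5                      # flow_a is popular
    install("flow_d")                        # evicts the least-used rule
    print("TCAM:", tcam, "| controller:", controller_store)
    print(lookup("flow_b"))                  # comes back from the controller
```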

David Gorman from IBM kick-started the fourth day with his excellent keynote talk on “Making Blockchain Real for Business”. David shared his vision of a world of shared ledgers, smart contracts, privacy (certificates), and trust. He used auditing as one of the use cases to demonstrate the uniqueness of blockchain in tracking transactions (changes) compared with conventional database solutions. His talk then converged on a brief introduction to Hyperledger, a community effort on cross-industry blockchain technologies. I had a short and interesting discussion with David on the impact and use cases of blockchain in higher education. Ultimately, blockchain is merely a technology and not a solution (in fact, the same applies to SDN). I think it can be a key technology to enable cross-service end-to-end management, but in many cases a solution is dictated not by the technology but by politics and regulations.

On the last day, I only stayed until lunchtime before I had to leave to catch my flight. The highlight of the day was certainly Alex Galis (UCL)’s talk on Programmability, Softwarization and Management in 5G networking. He emphasised the importance and impact of softwarization and network programmability, especially the quality of slices in future networks. I’d summarise his talk, blending in my own views, as autonomous, adaptive, and automated end-to-end resource management. Alex also spent a few slides on the key challenges of network slicing, which are very helpful to new researchers in this field.

All in all, IM 2017 in Portugal was a wonderful event (in fact, the Portuguese have done so well that they also won Eurovision 2017). I am looking forward to its future iterations (NOMS and IM).

A middleware that aims at helping TV broadcasters to create and deliver immersive experiences

Congratulations to my MSc student Hussein Ajam, who has just had a paper accepted by the ACM TVX Work-in-Progress (WiP) track. His work was inspired by a collaboration with Rajiv and Matt at BBC R&D on prototyping a solution to 1) assist TV producers in authoring immersive experiences for TV programmes and 2) orchestrate multiple (IoT) user devices at home to convey a sense of immersion through synchronised media playback. Hussein’s work was also briefly supervised by Marie-Jose Montpetit, a renowned Research Scientist at the MIT Media Lab, as part of ACM TVX’s Mentoring Programme. Since I am chairing the WiP track, Hussein’s submission was handled by the general chair to avoid any conflict of interest and ensure fairness, and I am very pleased to see the positive result, especially in a track with an acceptance rate of just over 50% (I will write a chair’s summary of the 10 accepted papers). There is still a lot of work to do on Hussein’s ambitious work plan, and I am sure he will enjoy the conference in June.


Ajam, H., and Mu, M., A Middleware to Enable Immersive Multi-Device Online TV Experience, to appear in 2017 ACM International Conference on Interactive Experiences for Television and Online Video (TVX 2017) Work-in-Progress track, Hilversum, The Netherlands, 06/2017

Abstract:

Recent years have witnessed the boom of great technologies of smart devices transforming the entertainment industry, especially the traditional TV viewing experiences. In an effort to improve user engagement, many TV broadcasters are now investigating future generation content production and presentation using emerging technologies. In this paper, we introduce an ongoing work to enable immersive and interactive multi-device online TV experiences. Our project incorporates three essential developments on content authoring, device discovery, and cross-device media orchestration.

Enabling Rapid Experimentation of Contextual Network Traffic Management using SDN

The non-cooperative and unsupervised resource competition between adaptive media applications (such as YouTube and Netflix) leads to significant detrimental quality fluctuations and an unbalanced share of network resources. Therefore, it is essential for content networks to better understand the application- and user-level requirements of different data flows and to manage traffic intelligently. I am glad to have been part of a team of talented researchers which was one of the first to experiment with Software-Defined Networking (SDN)-assisted, QoE-aware network management using physical OpenFlow network switches. SDN is a network paradigm that decouples network control from the underlying packet forwarding. Combined with fog computing and Network Function Virtualization (NFV), this opens up compute locations close to the edge to enable intelligent network traffic management services (I also call this cognitive networking).

Following our publications [1,2,3], there have been numerous requests from the research community to open-source our experimentation environment (named REF – Rapid Experimentation Framework). REF is an experimentation framework and a guide to building a testbed that together provide a blueprint for an SDN-based contextual network design facility. In contrast to existing facilities, which typically provide very detailed low-level control over just the network infrastructure, our work provides higher-level abstractions of both the network and virtualisation infrastructures through orchestration, automating the creation, connection, running, and cleaning of nodes in an experiment. REF also provides abstraction over the network to make the creation of context-aware traffic management applications as streamlined as possible. Additionally, with a unique configuration using slicing and port multiplexing, REF can create much larger physical networks with limited hardware than its competitors. Finally, the entire REF framework can be used and modified by anyone without any kind of registration or subscription to a federation.
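
To give a flavour of what “higher-level abstractions … automating the creation, connection, running, and cleaning of nodes” can look like, here is a purely hypothetical experiment description. None of the class or function names below come from REF’s actual API; they are placeholders illustrating the kind of orchestration the framework aims to provide.

```python
# Purely hypothetical sketch of a high-level experiment description.
# None of these names are from REF's actual API; they only illustrate the idea
# of describing node creation, connection, running and clean-up in one place.

from dataclasses import dataclass, field

@dataclass
class Experiment:
    name: str
    nodes: list = field(default_factory=list)
    links: list = field(default_factory=list)

    def add_node(self, name: str, role: str):
        self.nodes.append({"name": name, "role": role})

    def connect(self, a: str, b: str, bandwidth_mbps: int):
        self.links.append({"ends": (a, b), "bandwidth_mbps": bandwidth_mbps})

    def run(self):
        # A real framework would create VMs/containers, programme the switches,
        # start the workload, collect results and tear everything down afterwards.
        print(f"[{self.name}] provisioning {len(self.nodes)} nodes, "
              f"{len(self.links)} links ... running ... cleaning up")

if __name__ == "__main__":
    exp = Experiment("qoe-fairness-demo")
    exp.add_node("client1", role="dash-client")
    exp.add_node("server", role="video-server")
    exp.connect("client1", "server", bandwidth_mbps=20)
    exp.run()
```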

Needless to say, to “open-source” a framework is not a straightforward task. Our source code is pretty much meaningless if it is not well connected with properly configured hardware and a comprehensive guideline of do’s and don’ts. We wanted to publish this tutorial-style guideline in an elite outlet (so more people can benefit from it) while keeping the writing style suitable for SDN beginners, and there is nothing more suitable for our work than the IEEE Communications Magazine. Furthermore, because we are using HPE’s network switches (3800 and later 3810 series) as reference equipment (and we know for sure that vendors’ implementations of standards such as OpenFlow are a determining factor), we had to work with HPE to make sure our analysis and conclusions were accurate. Fortunately, Bruno Hareng, an SDN and Security Solution Manager at HPE, provided invaluable input to our work.

[Figure: Framework for rapid SDN experimentation]

The manuscript describes the framework (shown in the figure above), covering the requirements of the framework, then the purpose of each component within the system as well as the abstractions it provides to the user. Next, the experiment testbed is detailed, providing a guide on how to construct your own virtualisation and network infrastructure for experimentation. After this, two use cases are described and used to show REF in operation: a Quality of Experience (QoE)-aware resource allocation model and a network-aware dynamic ACL. Finally, the article discusses interesting findings that arose during the creation and use of the system. The manuscript has been accepted by IEEE Communications Magazine for publication in a July 2017 issue:

Fawcett, L., Mu, M., Hareng, B., and Race, N., “REF: Enabling Rapid Experimentation of Contextual Network Management using Software Defined Networking”, in IEEE Communications Magazine, 2017

Abstract:

Online video streaming is becoming a key consumer of future networks, generating high-throughput and highly dynamic traffic from large numbers of heterogeneous user devices. This places significant pressure on the underlying networks and can lead to a deterioration in performance, efficiency and fairness. To address this issue, future networks must incorporate contextual network designs that recognise application and user-level requirements. However, designs of new network traffic management components such as resource provisioning models are often tested within simulation environments which lack subtleties in how network equipment behaves in practice. This paper contributes the design and operational guidelines for a Software-Defined Networking (SDN) experimentation framework (REF), which enables rapid evaluation of contextual networking designs using real network infrastructures. Two use case studies of a Quality of Experience (QoE)-aware resource allocation model, and a network-aware dynamic ACL demonstrate the effectiveness of REF in facilitating the design and validation of SDN-assisted networking.


References:

[1] Mu, M., Broadbent, M., Hart, N., Farshad, A., Race, N., Hutchison, D. and Ni, Q., “A Scalable User Fairness Model for Adaptive Video Streaming over SDN-Assisted Future Networks”, in IEEE Journal on Selected Areas in Communications. 34, 2168-2184, 2016. DOI: 10.1109/JSAC.2016.2577318

[2] Fawcett, L., Mu, M., Broadbent, M., Hart, N., and Race, N., SDQ: Enabling Rapid QoE Experimentation using Software Defined Networking, to appear in IFIP/IEEE International Symposium on Integrated Network Management (IEEE IM), Lisbon, Portugal, 05/2017

[3] Mu, M., Simpson, S., Farshad, A., Ni, Q., and Race, N., User-level Fairness Delivered: Network Resource Allocation for Adaptive Video Streaming (BEST PAPER AWARD), in Proceedings of 2015 IEEE/ACM International Symposium on Quality of Service (IWQoS), Portland, USA, 06/2015