Smart Campus project – part 2

In Part 1, I introduced the architecture and showed some sample charts from my Smart Campus project. The non-intrusive use of WiFi data for campus services and student experience is really cool.

As we approach the start of the university term, I have less time to work on this project, so my focus has been on prototyping a “student-facing” application that visualises live building information. The idea is that students can tell which computing labs are free, find quiet study areas, or check whether the student helpdesk is too busy to visit. The security team can also use it to spot abnormal activity at certain times of the day.

The chart below shows a screenshot of a live floor heatmap with breakdowns of lecture rooms (labelled white), study areas (also labelled white), staff areas (labelled black), and service areas (labelled grey).

floor heatmap (not for redistribution)

Technically, the application is split into three parts: a user-facing front-end (floor chart), a data feed (JSON feed) and a backend (data processing). The data feed layer provides the necessary segregation so that user requests don’t trigger backend operations directly.

The front-end chart is still based on the Highcharts framework, though I needed to manually draw the custom map in Inkscape based on the actual floor plan, export it as an SVG, and convert it to map JSON using Highcharts’ online tool. At the same time, the mapping between areas (e.g., lecture rooms) and their corresponding APs must also be recorded in the database. This is a very time-consuming process that requires a bit of graphic editing skill and a lot of patience.
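
Conceptually, the area-to-AP mapping is just a lookup table. Here is a minimal sketch with made-up table and column names (the real schema is different and not shown here):

import sqlite3

# Illustrative sketch only: table, column and AP names are placeholders.
conn = sqlite3.connect("smart_campus.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS area_ap_map (
        area_id   TEXT,  -- key used in the custom floor map JSON, e.g. 'LAB-2F-01'
        area_type TEXT,  -- 'lecture', 'study', 'staff' or 'service'
        ap_name   TEXT   -- AP identifier as reported by the wireless controller
    )
""")
conn.execute("INSERT INTO area_ap_map VALUES (?, ?, ?)",
             ("LAB-2F-01", "lecture", "AP-LH-2F-07"))
conn.commit()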

The backend functions adopt a 10-minute moving-average window and periodically calculate the AP/area device population to generate data for each area defined in the custom floor map. I also filtered out devices that are simply passing by APs to reduce noise in the data (e.g., a person walking along the corridor will not leave a trace). The data is then merged with the floor-map JSON to generate the data feed every few minutes as a static JSON file.
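
To illustrate the idea, here is a minimal sketch of the windowed population count with pass-by filtering; it is not the production code, and the sample layout, field names and dwell-time threshold are assumptions:

from collections import defaultdict

WINDOW_MINUTES = 10      # length of the moving window
MIN_DWELL_SAMPLES = 3    # assumed threshold: fewer appearances = passing-by device

def area_population(samples, area_of_ap):
    """samples: one dict per minute (newest last), mapping device_hash -> ap_name.
    area_of_ap: mapping from AP name to floor-map area id.
    Returns an averaged device count per area over the window."""
    window = samples[-WINDOW_MINUTES:]

    # Count how often each device shows up in the window (to drop pass-by devices).
    appearances = defaultdict(int)
    for snapshot in window:
        for device in snapshot:
            appearances[device] += 1

    # Per-minute area counts, ignoring devices below the dwell threshold.
    totals = defaultdict(int)
    for snapshot in window:
        for device, ap in snapshot.items():
            area = area_of_ap.get(ap)
            if area is not None and appearances[device] >= MIN_DWELL_SAMPLES:
                totals[area] += 1

    return {area: total / len(window) for area, total in totals.items()}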

A finishing touch is the chart annotation for most floor areas. I use different label colours so areas with different functions can be clearly identified.

TB to Unity – A small software tool for creative VR artists

[I am still learning Unity/abstract art. Do let me know if you spot me doing anything silly.]

References:
https://github.com/googlevr/tilt-brush-toolkit
https://docs.google.com/document/d/1YID89te9oDjinCkJ9R65bLZ3PpJk1W4S1SM2Ccc6-9w/edit
https://blog.google/products/tilt-brush/showcase-your-art-new-ways-tilt-brush-toolkit/

Google Tilt Brush (TB) is a virtual art studio that enables artists to create paintings in VR. It’s packed with features for editing and sharing. Just as physical artworks require a gallery for exhibition, TB VR paintings need a specialised environment for their audiences. Game engines such as Unity are a natural choice since they offer a wide spectrum of tools to help install artwork, control the environment, and choreograph interactions with the audience. You can also “bake” the outcomes for different platforms.

The standard workflow to port an artwork to Unity is: export the TB artwork as an FBX file -> import the FBX into Unity and add it to the scene -> apply Brush materials to the meshes using the content provided by the tiltbrush-toolkit. This works well until you want to do anything specific with each brush stroke, such as hand-tracking to see where people touch the artwork (yes, it’s OK to touch! I even put my head into one to see what’s inside). In Unity, artworks are stored in meshes and there is no one-to-one mapping between brush strokes and meshes. In fact, all strokes of the same brush type are merged into one big mesh (even when they are not connected) when they are exported from TB. This is (according to a TB engineer) to make the export/import process more efficient.

The painting below was made using only one brush type, “WetPaint”, in spite of the different colours, patterns and physical locations of the strokes. So in the eyes of Unity, all five thousand brush strokes are one mesh and there is nothing you can do about it, as it’s already fixed in the FBX when the artwork is exported from TB. This simply won’t work if an artist wants to continue her creative process in Unity or collaborate with game developers to create interactive content.

Abstract VR Painting Sketch Copyright@Alison Goodyear

To fix this, we have to bypass TB’s FBX export function. Luckily, TB also exports artworks in JSON format. Using the Python-based export tools in the tiltbrush-toolkit, it’s possible to convert JSON to FBX with your own configurations. Judging from the developer comments in the source code, these export tools came before TB supported direct FBX export. Specifically, the "geometry_json_to_fbx.py" script allows us to perform the conversion with a few useful options, including whether to merge strokes ("--no-merge-brush"). However, not merging strokes by brush type led to loose meshes in Unity with no obvious clue of their brush type. With some simple modifications to the source code, the script exports meshes with the brush type as a prefix in the mesh names, as shown below. This setup makes it easy to select all strokes with the same brush type, lock them, and apply brush materials in one go. I also added a sequence number at the end of the mesh name (starting from 1000).

Occasionally, we put multiple artworks in the same Unity scene, like a virtual gallery. It is then important to be able to differentiate meshes from different artworks in the asset list. This is done by including the original JSON filename in the mesh name (“alig” in the picture below). At the moment, we are working on understanding how audiences interact with paint of different colours, so the colour of the stroke (in “abgr little-endian, rgba big-endian”) is also encoded for quick access in Unity. As a whole, the mesh naming scheme is: BRUSHTYPE_STARTINGCOLOUR_JSONNAME_ID. All of this is based on some simple hacking of the "write_fbx_meshes()" and "add_mesh_to_scene()" functions.
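
The naming logic itself is simple; here is a simplified sketch of how such a mesh name could be composed (the helper below is illustrative, not the actual code in the modified script):

def make_mesh_name(brush_type, starting_colour, json_name, index, start_id=1000):
    """Compose a mesh name as BRUSHTYPE_STARTINGCOLOUR_JSONNAME_ID.
    starting_colour is the first entry of the stroke's colour array
    (uint32, abgr little-endian / rgba big-endian)."""
    return "{}_{:08x}_{}_{}".format(brush_type, starting_colour, json_name, start_id + index)

# e.g. make_mesh_name("WetPaint", 0xFF2040A0, "alig", 0) -> 'WetPaint_ff2040a0_alig_1000'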

Coding metadata of brush strokes in their names is sufficient in most cases, though there are experiments where we need more detailed, fine-grained brush information. As far as colour is concerned, it is imperative to log the full colour array, since the colour may change along the stroke; in our mesh names, we only record the starting colour. To support better data-driven research, we also export the full stroke metadata as a JSON file alongside the FBX. The schema is:

{'fbxname': FBXNAME,
 'fbxmeta':
  [{'meshname': MESHNAME,
    'meshmeta':
     {'brush_name': BRUSHNAME,
      'brush_guid': BRUSH_GUID,
      'v': V,      # list of positions (3-tuples)
      'n': N,      # list of normals (3-tuples, or None if missing)
      'uv0': UV0,  # list of uv0 (2-, 3-, 4-tuples, or None if missing)
      'uv1': UV1,  # see uv0
      'c': C,      # list of colors, as a uint32. abgr little-endian, rgba big-endian
      't': T,      # list of tangents (4-tuples, or None if missing)
      'tri': TRI   # list of triangles (3-tuples of ints)
     }
   }, {}, ...]
}
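
As a quick usage sketch, the exported metadata can be loaded straight back into Python, e.g., to group meshes by brush (the filename below is hypothetical):

import json
from collections import defaultdict

# Hypothetical filename; the metadata JSON is exported alongside the FBX.
with open("alig_meta.json") as f:
    meta = json.load(f)

strokes_by_brush = defaultdict(list)
for entry in meta["fbxmeta"]:
    brush = entry["meshmeta"]["brush_name"]
    strokes_by_brush[brush].append(entry["meshname"])

for brush, meshes in strokes_by_brush.items():
    print(brush, len(meshes), "meshes")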

The modified script is available here: https://github.com/MrMMu/tiltbrushfbxexport

Another example is Alison’s “Peacock” painting imported into Unity:

Copyright@Alison Goodyear

Research on the fairness of networked multimedia to appear in FAT/MM WS at ACM Multimedia 2019

A job well done for a first-year PhD student.

SDCN: Software Defined Cognitive Networking


Basil, A. et al., A Software Defined Network Based Research on Fairness in Multimedia, FAT/MM Workshop at the 27th ACM International Conference on Multimedia (ACM MM 2019), France, October 2019.

The demand for online distribution of high quality and high throughput content has led to a non-cooperative competition of network resources between a growing number of media applications. This causes a significant impact on network efficiency, the quality of user experience (QoE) as well as a discrepancy of QoE across user devices. Within a multi-user multi-device environment, measuring and maintaining perceivable fairness becomes as critical as achieving the QoE on individual user applications. This paper discusses application- and human-level fairness over networked multimedia applications and how such fairness can be managed through novel network designs using programmable networks such as software-defined networks (SDN).



“Disruptive” VR art? A quick update

Lovely sunset view from Lowry

Our visit to TVX 2019 was a tremendous success. Murtada and Alison’s lightning talks were well received and we managed to have two demos at BBC Quay House on the last day.

Alison’s VR painting demo had a great start, then took an interesting turn and became a community art creation exercise. Audience members from different backgrounds built on each other’s creations and the artwork just kept growing in multiple dimensions (no canvas to limit you and no one is afraid of making a “digital mess”). This has really inspired us to look into collaborative VR art more closely.

Alison’s VR painting demo (trust me, I tried tidying the desk)

Murtada’s gaze-controlled game saw a lot of visitors who “always wanted to do something with eye-tracking in VR”. We are already working on the third version of the game. We have changed the strategy from “building a research tool that contains game elements” to “building a professional VR game with a research tool integrated”. The game will also be part of a use case for our Intelligent Networks experiments.

Murtada’s gaze-controlled game demo

Immediately after TVX, we also organised a workshop at the Merged Futures event on our campus. Our audience was mainly SMEs and educators from Northants and nearby counties.

VR arts and education workshop at Merged Futures 2019, UON

Slides from the workshop:

Smart Campus project – part 1

Most research in communication networks is quite fundamental, such as sending data frames from point A to point B as quickly as possible with little loss on the way. Some networking research can also benefit communities indirectly. I recently started a new collaboration with our University IT department on a smart campus project, where we use anonymised data sampled from a range of on-campus services for service improvement and automation with the help of information visualisation and data analytics. The first stage of the project is very much focused on the “intent-based” networking infrastructure by Cisco on Waterside campus. This state-of-the-art system provides us with a central console and APIs to manage all network switches and 1000+ wireless APs. Systematically studying how user devices are connected to our APs can help us, in a non-intrusive fashion, better understand the way(s) our campus is used, and use that intelligence to improve our campus services.

Although it’s possible to correlate data from various university information systems to infer the ownership of devices connected to our wireless networks, my research does not make use of any data related to user identity at this stage. This is not only because it is unnecessary (we are only interested in how people use the campus as a whole), but also because of how privacy and data protection rules are implemented. This is not to say that we’ll avoid any research on individual user behaviours. There are many use cases around timetabling, bus services, personal wellbeing and safety that will require volunteers to sign up to participate.

This part 1 blog shares the R&D architecture and some early prototypes of data visualisation before they evolve into something humongous.

A few samples of the charts we have:

Wireless connected devices in an academic building with breakdowns for each floor. There is a clear weekly and daily pattern. We are able to tell which floors are over- or under-used, and use this to improve our energy efficiency and help students or staff find free space to work. [image not for redistribution]
An “anomaly” due to a fire alarm test (hundreds of devices leaving the building in minutes). We can examine how people leave from different areas of the building and identify any bottlenecks. [image not for redistribution]
Connected devices on campus throughout a typical off-term day with breakdowns for different areas (buildings, zones, etc.). [image not for redistribution]
Heatmap of devices connected in an academic building during off-term weeks. The heat strips are grouped by weekday, except for an Open Day Saturday. [image not for redistribution]
Device movements between buildings/areas. This helps us to understand the complex dependencies between parts of our infrastructure and how we can improve the user experience. [image not for redistribution]
How connected devices were distributed across campus over the past 7 days, and the top 5 areas on each floor of academic buildings. [image not for redistribution]

So how were the charts made?

The source of our networking data is the Cisco controllers. The DNA Center offers secure APIs, while the WLC has a well-structured interface for data scraping. Either option worked for us, so we have Python-based data sampling functions programmed for both interfaces. What we collect is a “snapshot” of all devices on our wireless networks and the details of the APs they are connected to. All device information such as MAC addresses can be hashed, as long as we can differentiate one device from another (count unique devices) and associate a device across different samples.

We think of devices’ movements on campus as a continuous signal. The sampling process is essentially an ADC (analog to digital conversion) exercise similar to audio sampling. The Nyquist theorem instructs us to use a sampling frequency at least twice the highest frequency of the analog signal to faithfully capture the characteristics of the input. In practice, the signal frequency is determined by the density of wireless APs in an area and how fast people travel. In a seating area on our Learning Hub ground floor, I could easily pass a handful of APs during a minute-long walk. Following the maths and sampling from the control centre every few seconds risks killing the data source (and, unlikely but possibly, the entire campus network). As a first prototype, I compromised on a 1/min sampling rate. This may not affect our understanding of movement between buildings that much (unless you run really fast between buildings), but we might need some sensible data interpolation for indoor movements (e.g., a device didn’t teleport from the third-floor library to a fourth-floor classroom; it travelled via a stairwell/lift).
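
A minimal sketch of one sampling iteration is shown below; the controller query, field names and the hashing salt are placeholders, not the actual implementation:

import hashlib, pickle, time

SALT = b"replace-with-a-secret"   # placeholder: keeps hashed MACs linkable across samples

def anonymise(mac):
    """One-way hash of a MAC address; the same device maps to the same token in every sample."""
    return hashlib.sha256(SALT + mac.encode()).hexdigest()[:16]

def take_snapshot(get_clients):
    """get_clients() stands in for the DNA Center / WLC query returning
    a list of {'mac': ..., 'ap': ...} records."""
    now = int(time.time())
    snapshot = [{"device": anonymise(c["mac"]), "ap": c["ap"]} for c in get_clients()]
    with open("sample_{}.pkl".format(now), "wb") as f:   # one Pickle file per sample
        pickle.dump({"ts": now, "clients": snapshot}, f)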

Architecture (greyed out elements will be discussed in future blogs)

The sampling outcomes are stored as data snippets in the form of Python Pickle files (one file per sample). The files are then picked up asynchronously by a Python-based data filtering and DB insertion process, which inserts the data into a database for analysis. Processed Pickle files are archived and hopefully never needed again. Separating the sampling and the DB insertion makes things easier when you are prototyping (e.g., changing the DB table structure or data types while sampling continues).
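
And a rough sketch of the decoupled pickup-and-insert step, assuming the file layout from the sampling sketch above (SQLite stands in for our actual database here):

import glob, os, pickle, shutil, sqlite3

# Illustrative stand-in; the production database and schema differ.
db = sqlite3.connect("smart_campus.db")
db.execute("CREATE TABLE IF NOT EXISTS samples (ts INTEGER, device TEXT, ap TEXT)")
os.makedirs("archive", exist_ok=True)

for path in sorted(glob.glob("sample_*.pkl")):
    with open(path, "rb") as f:
        sample = pickle.load(f)
    db.executemany("INSERT INTO samples VALUES (?, ?, ?)",
                   [(sample["ts"], c["device"], c["ap"]) for c in sample["clients"]])
    db.commit()
    shutil.move(path, os.path.join("archive", os.path.basename(path)))  # archive processed files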

Data growth [image not for redistribution]

With the records in our DB growing at a rate of millions per day, some resource-intensive pre-processing / aggregation (such as the number of unique devices per hour on each floor of a building) needs to be done periodically to accelerate any subsequent server-side functions for data visualisation, reducing the volume of data going to the web server by several orders of magnitude. This comes at the cost of inserting additional entries into the database and risking “seams” between iterations of pre-processing, but the benefit clearly outweighs the cost.
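
As an example, the hourly unique-device count per floor could be pre-computed with a single aggregation query along these lines (the table layout is an assumption carried over from the sketches above, not our real schema):

import sqlite3

# Assumed tables: samples(ts, device, ap) and ap_floor_map(ap_name, floor).
db = sqlite3.connect("smart_campus.db")
db.execute("CREATE TABLE IF NOT EXISTS hourly_floor_counts "
           "(hour TEXT, floor TEXT, unique_devices INTEGER)")
db.execute("""
    INSERT INTO hourly_floor_counts (hour, floor, unique_devices)
    SELECT strftime('%Y-%m-%d %H:00', s.ts, 'unixepoch') AS hour,
           m.floor,
           COUNT(DISTINCT s.device)
    FROM samples s
    JOIN ap_floor_map m ON m.ap_name = s.ap
    WHERE s.ts >= strftime('%s', 'now', '-1 day')
    GROUP BY 1, 2
""")
db.commit()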

The visualisation process is split into two parts: the plot (chart) and the data feed. There are many choices for professional-looking static information plotting, such as Matplotlib and ggplot2 (see how the BBC Visual and Data Journalism team works with graphics in R). Knowing that we’ll present the figures in interactive workshops, I made a start with web-based dynamic charts that “bring data to life” and allow us to illustrate layers of information while encouraging exploration. Frameworks that support such tasks include D3.js and Highcharts (a list of 14 can be found here). Between the two, D3 gives you more freedom to customise your chart, but you’ll need to be an SVG guru (with a degree of artistic excellence) to master it. Meanwhile, Highcharts provides many sample charts for you to begin with, and the data feed is easy to programme. It’s an ideal tool for prototyping and only some basic knowledge of Javascript is needed. To feed structured data to Highcharts, we pair each chart page with a PHP worker for data aggregation and formatting (a simplified sketch of this step follows after the workflow below). The workflow is as follows:
1) The client-side webpage loads all elements, including the Highcharts framework and the HTML elements that accommodate the chart.
2) A jQuery function waits for the page load to complete and initiates a Highcharts instance with the data feed left open (empty).
3) The same function then calls a separate Javascript function that performs an AJAX call to the corresponding PHP worker.
4) The PHP worker runs server-side code, fetches data from MySQL, and performs any data aggregation and formatting necessary before returning the JSON-encoded results to the front-end Javascript function.
5) Upon receiving the results, the Javascript function conducts lightweight data demultiplexing for more complex chart types and sets the data attribute of the Highcharts instance with the new data feed.
For certain charts, we also provide some extra user input fields to handle user queries (e.g., plot data from a particular day).
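
As mentioned above, our worker is written in PHP, so the following is only a hedged Python stand-in for its aggregation-and-formatting step, reusing the illustrative tables from the earlier sketches:

import json, sqlite3

def chart_feed(floor):
    """Python stand-in for the PHP worker: fetch pre-aggregated data and
    format it as the kind of JSON series a Highcharts chart can consume."""
    db = sqlite3.connect("smart_campus.db")
    rows = db.execute(
        "SELECT hour, unique_devices FROM hourly_floor_counts "
        "WHERE floor = ? ORDER BY hour", (floor,)).fetchall()
    return json.dumps({
        "series": [{"name": "Floor {}".format(floor),
                    "data": [[hour, count] for hour, count in rows]}]
    })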

Data science and network management at IEEE IM 2019, Washington, D.C.

IEEE IM 2019 – Washington DC, USA (link to papers)
Following IM 2017 in picturesque Lisbon, one of the most beautiful cities in Europe, this year’s event was held in the US capital during its peak cherry blossom season.

The conference adopted the theme of “Intelligent Management for the Next Wave of Cyber and Social Networks”. Besides the regular tracks, the five-day conference featured some great tutorials, keynotes and panels. I have pages of notes and many contacts to follow up.

A few highlights: zero-touch network and service management (and how it’s actually “touch less” rather than touchless!), Huawei’s Big Packet Protocol (network management via packet header programming), DARPA’s off-planet network management (fractionated architectures for satellites), blockchain’s social, political and regulatory challenges (might not work with GDPR?) by UZH, data science/ML for network management from Google and Orange Labs (with some Python notebooks and a comprehensive survey paper of 500+ references), and many more. I am hoping to write more about some of them in the future when I have a chance to study them further. There are certainly some good topics for student projects.

Since I am linked to both the multimedia/HCI and communication network communities, I have the opportunity to observe the different approaches and challenges faced by these communities towards AI and ML. In the multimedia community, it’s relatively easy to acquire large and clean datasets, and there is a high level of tolerance when it comes to trial and error: 1) no one will get upset if a few out of a hundred image search results are inaccurate, and 2) you can piggyback a training module / reinforcement learning on your services to improve the model. Furthermore, applications are often part of a closed proprietary environment (end-to-end control) and users are not that bothered about giving up their data. In networking, things are not far from “mission impossible”. 95% accuracy in packet forwarding will not get you very far, and there is not much infrastructure available to track any data, let alone make any data open for research. Even when there are tools to do so, you are likely to encounter encryption or information that is too deep to extract in practice. Also, tracking network data seems to attract more controversy. We have a long and interesting way to go.

Washington, D.C. is surrounded by some amazing places to visit. George Washington’s riverside Mount Vernon is surely worth a trip. Not far from Dulles airport is Great Falls Park, with spectacular waterfalls on the Potomac river, which separates Maryland and Virginia. Further west are the 100-mile scenic Skyline Drive and the Appalachian Trail in Shenandoah National Park.

We are taking VR and art research to ACM TVX 2019

I have been a regular visitor to ACM TVX since it first became an ACM-sponsored event in 2014 (previously known as EuroITV). This year, the conference will be held at MediaCityUK, Salford in early June. We’ll bring two pieces of early-stage research to Salford: understanding user attention in VR using gaze-controlled games by Murtada Dohan (a newly started PhD candidate), and a demo of abstract painting in VR by fine art artist Dr Alison Goodyear. You might have guessed that we have plans to bring these two together and experiment with new ways of content creation and audience engagement for both the arts and HCI communities.

Links to:


Dohan, M. and Mu, M., Understanding User Attention In VR Using Gaze Controlled Games.
Abstract: Understanding user’s intent has a pivotal role in developing immersive and personalised media applications. This paper introduces our recent research and user experiments towards interpreting user attention in virtual reality (VR). We designed a gaze-controlled Unity VR game for this study and implemented additional libraries to bridge raw eye-tracking data with game elements and mechanics. The experimental data show distinctive patterns of fixation spans which are paired with user interviews to help us explore characteristics of user attention.

Goodyear, A. and Mu, M., Abstract Painting Practice: Expanding in a Virtual World
Abstract: This paper sets out to describe, through a demo for the TVX Conference, how virtual reality (VR) painting software is beginning to open up as a new medium for visual artists working in the field of abstract painting. The demo achieves this by describing how an artist who usually makes abstract paintings with paint and canvas in a studio, that is those existing as physical objects in the world, encounters and negotiates the process of making abstract paintings in VR using Tilt Brush software and Head-Mounted Displays (HMD). This paper also indicates potential future avenues for content creation in this emerging field and what this might mean not only for the artist and the viewer, but for art institutions trying to provide effective methods of delivery for innovative content in order to develop and grow new audiences.

Copyright belongs to Dr Alison Goodyear