BSc Computing/Business Computing Dissertation Topics

[This is a “work-in-progress” page which I update every now and then.]

A few topics are listed below. Some are offered in collaboration with industrial partners. Each topic may come in specific flavours: Research (R) and Research & Development (R&D). R&D topics are suitable for all students, while Business Computing students may also opt for a research/design-only topic. You are also welcome to suggest other topics.

[BBC] A middleware API to enable the orchestration of multi-device experiences [R&D]

Essentially, this involves APIs/abstractions that enable an experience creator to build complex experiences, such as one device having to wait for playback to complete on another device before it can proceed. To realise this, one may provide primitives such as synchronisation barriers or distributed event mechanisms with which the experience creator can express complex relationships between device presentations. For more details, contact me mu.mu@northampton.ac.uk
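As a rough illustration only (a hypothetical API, not the BBC middleware), the sketch below shows the kind of synchronisation-barrier primitive an experience creator might be given: each device blocks until every participant has finished its part.

```python
# Minimal sketch of a synchronisation-barrier primitive (hypothetical API,
# not an existing BBC interface). Each device blocks until all participants
# have signalled that their presentation step is complete.
import threading

class ExperienceBarrier:
    """Release every device only once all participants have arrived."""
    def __init__(self, participants: int):
        self._barrier = threading.Barrier(participants)

    def arrive_and_wait(self, device_id: str, timeout: float = 30.0) -> None:
        print(f"{device_id}: finished playback, waiting for the others...")
        self._barrier.wait(timeout)
        print(f"{device_id}: released, moving to the next scene.")

# Usage: a TV and a companion tablet both call arrive_and_wait() when their
# clip ends; neither proceeds to the next scene until both have arrived.
barrier = ExperienceBarrier(participants=2)
for device in ("living-room-tv", "tablet"):
    threading.Thread(target=barrier.arrive_and_wait, args=(device,)).start()
```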

[BBC] A Domain Specific Language to express multi-device experiences [R&D]

Following the same design principle as the topic above, you’ll build a tool that allows experience creators to create/visualise experiences and generates a domain-specific language (DSL) description which a collection of devices can use to ‘play’ the experience. A potential game changer in broadcasting and post-production studios. For more details, contact me mu.mu@northampton.ac.uk
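To make the idea concrete, here is a toy illustration with an invented syntax (not an existing DSL): a short script describes a two-device experience, and a few lines of Python turn it into a list of steps a player could execute.

```python
# Toy DSL example (invented syntax, purely illustrative): the tablet waits for
# the TV clip to finish, then both devices synchronise on a quiz scene.
EXPERIENCE_DSL = """
device tv      plays intro.mp4
device tablet  waits_for tv then plays map.html
barrier quiz_start: tv, tablet
device tv      plays quiz.mp4 at quiz_start
device tablet  plays quiz_companion.html at quiz_start
"""

def parse(dsl: str) -> list:
    """Very small parser: one dictionary per line of the script."""
    steps = []
    for line in dsl.strip().splitlines():
        kind, name, *rest = line.split()
        if kind == "device":
            steps.append({"device": name, "rule": " ".join(rest)})
        else:  # barrier line
            steps.append({"barrier": name.rstrip(":"),
                          "members": [m.rstrip(",") for m in rest]})
    return steps

for step in parse(EXPERIENCE_DSL):
    print(step)
```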

[BBC] Build a synchronisation accuracy toolkit [R&D]

This project involves using an Arduino (www.arduino.cc), light sensors, and audio inputs to determine the asynchrony between two devices playing the same stream. The toolkit might use DVB-CSS protocols to instruct the devices to play back test streams. Such a toolkit could become an essential tool for enabling new TV viewing experiences across multiple linked devices. For more details, contact me mu.mu@northampton.ac.uk
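One building block, sketched below under the assumption that you have already captured the same test sound from both devices (e.g. via the audio inputs), is to estimate the offset by cross-correlating the two recordings; the DVB-CSS signalling itself is not shown.

```python
# Sketch of the measurement idea only (the DVB-CSS control plane is not shown):
# estimate the asynchrony between two captures of the same test sound by
# cross-correlation. The synthetic example below injects a 120 ms delay.
import numpy as np

def estimate_offset_ms(capture_a: np.ndarray, capture_b: np.ndarray,
                       sample_rate: int = 48000) -> float:
    """Return how many milliseconds capture_b lags behind capture_a."""
    corr = np.correlate(capture_b, capture_a, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(capture_a) - 1)
    return 1000.0 * lag_samples / sample_rate

rate = 8000                                     # low rate keeps this toy example fast
click = np.r_[np.ones(50), np.zeros(rate)]      # a capture with a click at t = 0
delayed = np.r_[np.zeros(int(0.120 * rate)), click][:len(click)]
print(f"estimated asynchrony: {estimate_offset_ms(click, delayed, rate):.1f} ms")
```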

[BBC] Integrating headless Internet of Things devices in TV experiences [R&D]

This project involves identifying potential IoT devices that could be used for immersive TV-led experiences. You could then build a generic infrastructure to discover IoT devices and their capabilities. For example, an app on a TV could discover different IoT devices and then generate bindings to the devices’ specific operations. We envisage a scenario where TV apps could discover any type of IoT device in the home and download a ‘driver’ for that particular device model so that they can talk to it. For more details, contact me mu.mu@northampton.ac.uk
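The sketch below is one possible shape for that infrastructure (all names are illustrative, not an existing standard or BBC component): discovered devices advertise capabilities, and a per-model ‘driver’ registry binds those capabilities to callable operations.

```python
# Architectural sketch only: a hard-coded registry stands in for downloading a
# per-model 'driver'; the device model, address, and capability names are invented.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DiscoveredDevice:
    model: str
    address: str
    capabilities: List[str]          # e.g. ["light.on", "light.colour"]

DRIVER_REGISTRY: Dict[str, Dict[str, Callable[..., None]]] = {
    "acme-bulb-v2": {
        "light.on": lambda addr: print(f"POST http://{addr}/on (assumed endpoint)"),
        "light.colour": lambda addr, rgb: print(f"POST http://{addr}/colour {rgb}"),
    },
}

def bind(device: DiscoveredDevice) -> Dict[str, Callable[..., None]]:
    """Keep only the operations the device supports and we have a driver for."""
    driver = DRIVER_REGISTRY.get(device.model, {})
    return {cap: op for cap, op in driver.items() if cap in device.capabilities}

bulb = DiscoveredDevice("acme-bulb-v2", "192.168.1.42", ["light.on", "light.colour"])
bind(bulb)["light.colour"](bulb.address, (255, 80, 0))  # TV app tints the room to match a scene
```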

Waterside campus in virtual reality [D]

This project is part of a new initiative to recreate the entire Waterside campus in virtual reality and develop novel applications. You will be working on mesh objects produced by a professional media company (and approved by the architects) using the Waterside site plans. Depending on your interests (development, design, or arts), a manageable project will be tailored and supervised by the corresponding staff member(s). You’ll have the opportunity to deploy your work on the latest VR equipment, including Oculus Rift, Google Daydream, and FOVE.

Using multi-sensory media (Mulsemedia) to deliver immersive experiences [R&D]

Traditional multimedia applications have primarily engaged only two of the human senses – the aural and the visual – out of the five (or more?) possible. With recent advances in computational technology, it is now possible to talk of applications that engage other senses such as tactile, olfactory, and gustatory. The result, mulsemedia, may revolutionise the world of home entertainment and online retailing.   Ref1 Ref2

Synchronous e-learning platform (virtual classroom) [R&D, R]

With the University of Northampton moving towards a “flipped classroom” scheme, a significant amount of learning material will be delivered online prior to practical sessions on site. It is challenging to develop and integrate an e-learning platform that helps lecturers deliver great learning experiences online. Your brave new ideas, research, or prototype may help to shape the future of online learning. A potential technology to support this project is Augmented Reality. Ref1

Spatio-temporal media synchronisation [R&D]

The continuing growth in the mobile phone arena, particularly regarding device capabilities and ownership, is having a transformational impact on media consumption. It is now possible to consider orchestrated multi-stream experiences delivered across many devices (think what you can do with 10 smartphones in the same room), rather than the playback of content on a single device. However, there are significant challenges in realising such a vision, particularly around the management of synchronicity between associated media streams. This is compounded by the heterogeneous nature of user devices, the networks upon which they operate, and the perceptions of users. Ref1
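As a back-of-the-envelope illustration of the bookkeeping involved (a deliberate simplification, not a complete synchronisation scheme), the sketch below assumes each device reports the media time it is showing together with a shared wall-clock timestamp, and computes how far each laggard should skip ahead.

```python
# Simplified sketch: align devices to the furthest-ahead playback position,
# assuming all reports carry timestamps from a shared wall clock (e.g. NTP).
def playback_corrections(reports):
    """reports: device -> (media_time_s, wall_clock_s); returns seconds to skip ahead."""
    # Project every report onto the same wall-clock instant, then align to the leader.
    latest_wall = max(wall for _, wall in reports.values())
    projected = {dev: media + (latest_wall - wall)
                 for dev, (media, wall) in reports.items()}
    leader = max(projected.values())
    return {dev: round(leader - media, 3) for dev, media in projected.items()}

print(playback_corrections({
    "tv":    (12.000, 100.000),
    "phone": (11.950, 100.010),    # roughly 60 ms behind once projected
}))
# -> {'tv': 0.0, 'phone': 0.06}
```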

Beyond HTTP/2! QUIC-based adaptive media streaming [R&D]

In this topic, you’ll study the future of the internet and the new web protocols being trialled by Google and Facebook. The project will investigate and evaluate the use of advanced transport options for dynamic adaptive streaming over HTTP (DASH). Specifically, you’ll evaluate HTTP/2 and the Quick UDP Internet Connections (QUIC) protocol in the context of DASH-based services. The Hypertext Transfer Protocol Version 2 (HTTP/2) is available as RFC 7540 within the standards track, and QUIC is currently being developed as an Internet draft providing a UDP-based secure and reliable transport for HTTP/2, including loss recovery and congestion control. Ref1 Ref2
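A possible starting point, sketched below with the httpx library (installable as "httpx[http2]") and placeholder segment URLs, is simply to time segment downloads over HTTP/1.1 versus HTTP/2; QUIC/HTTP3 experiments would need a separate stack such as aioquic.

```python
# Measurement sketch only: the segment URLs are placeholders, and this compares
# HTTP/1.1 with HTTP/2 via httpx (pip install "httpx[http2]").
import time
import httpx

SEGMENT_URLS = [f"https://example.com/dash/seg_{i}.m4s" for i in range(1, 6)]  # placeholder

def time_downloads(http2: bool) -> float:
    start = time.perf_counter()
    with httpx.Client(http2=http2) as client:
        for url in SEGMENT_URLS:
            client.get(url)              # one DASH segment per request
    return time.perf_counter() - start

for label, flag in (("HTTP/1.1", False), ("HTTP/2", True)):
    print(f"{label}: {time_downloads(flag):.2f} s for {len(SEGMENT_URLS)} segments")
```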

Spatial audio or/and video [R&D]

Spatial audio and video are technologies that enable us to capture, distribute, and recreate a remote experience with 360 degrees of immersion. You’ll investigate the latest technologies and create a prototype that allows users to capture audio-visual content from different perspectives or directions, and synthesise the captured media to form a unique immersive experience. Ref1
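As one tiny, self-contained building block (a simplification; real spatial audio would involve HRTFs or ambisonics rather than plain panning), the sketch below maps a mono source at a given azimuth to constant-power stereo gains.

```python
# Constant-power stereo panning sketch: a mono signal is weighted onto the
# left/right channels according to its azimuth. Illustrative only.
import numpy as np

def pan_mono_to_stereo(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """azimuth_deg: -90 (hard left) .. +90 (hard right)."""
    theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2.0)    # map to 0..pi/2
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono], axis=1)

tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)   # 1 s of a 440 Hz tone
stereo = pan_mono_to_stereo(tone, azimuth_deg=30.0)         # source slightly to the right
print(stereo.shape)                                          # (48000, 2)
```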

Online social TV web/native application [R&D]

If you see yourself as a web designer, this may be the project for you. You will be given the live data feed (TV programme EPG, IP audio/video streams, etc.) of a production IPTV service, which allows you to build a full-size social TV application. You may use any suitable technologies, such as HTML5/CSS3, AngularJS, Flexbox, etc. You may also choose to develop native applications (e.g., iOS, Android). Please note that this is not a development-only project: you will be expected to demonstrate interaction design skills (not necessarily graphic design, but design with usability and user experience in mind).

Affective computing [R&D]

Affective Computing is computing that relates to, arises from, or deliberately influences emotion or other affective phenomena (Picard, MIT Press 1997). Emotion is fundamental to human experience, influencing cognition, perception, and everyday tasks such as learning, communication, and even rational decision-making. This project develops new technologies and theories that advance our basic understanding of affect and its role in human experience. You may develop new applications to help people with disabilities, to help lecturers better understand student engagement, or to empower interactive applications to recognise your emotions and adjust accordingly. Ref1

Contextual/cognitive networking [R&D]

Conventional network nodes are programmed to conduct only low-level tasks such as packet forwarding. We argue that communication networks should incorporate a deep understanding of user interactions, application characteristics, and other contextual information into network management tasks such as access control, routing, and resource allocation. Furthermore, a home network should dynamically adjust its configuration to reflect changes in user behaviour. A contextual network can greatly improve network efficiency, energy consumption, and user experience. You are welcome to use tools such as NS2/NS3, Mininet, OpenNet, etc. to evaluate your work. Ref1
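For example, the Mininet sketch below (assuming Mininet is installed and the script is run with root privileges) builds the smallest possible test network as a starting point; any contextual policy logic would then live in the controller, which is not shown.

```python
# Minimal Mininet starting point: two hosts behind one switch, then a
# reachability test. Contextual/cognitive policies are out of scope here.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.log import setLogLevel

class TwoHostTopo(Topo):
    def build(self):
        switch = self.addSwitch("s1")
        for name in ("h1", "h2"):
            self.addLink(self.addHost(name), switch)

if __name__ == "__main__":
    setLogLevel("info")
    net = Mininet(topo=TwoHostTopo())
    net.start()
    net.pingAll()        # baseline connectivity before adding any contextual policy
    net.stop()
```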

Machine Learning – Understanding user behaviours in online TV [R&D]

Understanding user interactions and behaviours is becoming as fundamental as streaming the content itself for any online media service. For instance, Amazon, Netflix, and Google all track our activities and use machine learning tools to improve their recommendation engines and service designs. In this project, you will have access to a large dataset of user activities captured on an online IPTV platform. Your role is to (study and) use machine learning tools to model how users interact with the service, and to propose solutions that better adapt the dynamic elements of a website to (latent) user preferences.
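One plausible first step, sketched below on synthetic stand-in data (the real dataset and its fields are not assumed), is to cluster viewers by a few behavioural features with scikit-learn, for example to separate ‘binge watchers’ from ‘channel hoppers’.

```python
# Clustering sketch on synthetic stand-in features; the real IPTV log fields
# are not assumed here. Features: [sessions_per_week, avg_session_minutes,
# channel_switches_per_hour].
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal([3, 120, 2], [1, 20, 1], size=(50, 3)),    # binge-like behaviour
    rng.normal([12, 15, 40], [3, 5, 10], size=(50, 3)),   # hopping-like behaviour
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(features))
print("cluster sizes:", np.bincount(labels))
```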

Helping me helping you: UoN computing department website [R(design)]

Designing a university departmental website is a challenging task. Such websites are often multi-purpose: 1) for prospective students to get a quick overview of the teaching and research activities, 2) for current students to catch up with the latest news and events, and 3) to demonstrate research and teaching excellence to the public and potential collaborators. In this project, you’ll redesign the UoN computing department website (computing.northampton.ac.uk) following an iterative interaction design and evaluation process.

Game with a Purpose (GWAP) for health [R&D]

Wouldn’t it be great to have fun playing a game while becoming healthier? See Ref1 for some inspiration.

Software-defined security [R]

Software-Defined Security (SDS) is a security model in which the information security of a computing environment is implemented, controlled, and managed by security software rather than hardware. The benefits of SDS are many, including simplicity, automation, scalability and flexibility, cost effectiveness, and increased security. Ref1

Life-logging and community storytelling [R&D]

Lifelogging is the process of tracking personal data generated by our own behavioural activities or automated media recordings. It is an effective tool, used by many, to improve their quality of life. Community storytelling links individual lifelogs to form a network of statistics and knowledge, allowing automated recognition of social challenges and improved community experiences. Ref1


Interested in proposing something new? In general, my areas of interest include, but are not limited to: media processing, media retrieval, media orchestration, online video, social network analysis, context-aware recommendation, affective computing, quality of service/quality of user experience, the Internet of multimedia things, software-defined networking, network function virtualisation, security, anomaly detection, wearables, augmented reality, virtual reality, human-computer interaction, and interaction design. I am also interested in cross-disciplinary research and development.