I am a Master in Science (MSci), having graduated from Durham University with a First Class Honours degree.
MSci in Natural Sciences: Computer Science and Mathematics - 1st, 83/100
This page contains extensive information on my skills, background and experience for anyone who is interested (this page therefore also acts as a map to my GitHub repositories):
- The technologies I have learnt over the years
- What I learnt at University / Module Breakdown
- Career Breakdown
- Prior Education
If you would like to find out anything more (or less, as this page is quite detailed) about me / get in contact, please check out my LinkedIn:
I studied Mathematics and Computer Science at Durham University under the Natural Sciences program.
Within Natural Sciences, you are able to choose exactly which modules you study, from whichever faculties you choose, thereby removing many of the restrictions of "normal" degrees - it is a build-your-own-degree degree and allows you to freely pursue your curiosities within your chosen fields! I would highly recommend it. For example, it allowed me to complete both a Bachelor's-level and a Master's-level Dissertation/Project in Computer Science, which students are normally not allowed to do (as it uses too many university resources)!
I graduated as a Master in Science with a First Class Honours degree (83/100) in June 2022.
Additionally, according to the measure employed by Durham University, a Natural Sciences student is the most qualified of any student.
- Zero-Shot Learning: Towards the Effortless Classification of Mystical Creatures: 77
- Poster (shown above), giving an overview of the problem space, the solution, and our results - alongside a preview of ZSL-KG+'s interactive website.
- Deep dive Presentation into our solution and results.
- Full Academic Paper.
- Natural Language Processing: 91
- Blockchain and Cryptocurrencies: 84
- Advanced Computer Vision: 81
For the final assignment we were tasked with transferring the style of a video from video-game style into 70s-movie style. To achieve this I trained a variety of Recycle-GANs from scratch, applying extensive state-of-the-art data selection and augmentation techniques using Albumentations to overcome the drastic overfitting common in video-to-video style transfer models. Recycle-GAN is a video-to-video variant of Cycle-GAN that uses spatio-temporal constraints to leverage motion information from successive frames and reduce perceptual mode collapse. To build the training data, I annotated joint-position confidences extracted using OpenPose and trained an SVM on them to classify humans into different poses, which I then combined with Mask R-CNN to extract pixel-wise segmentation masks of humans, allowing me to train different Recycle-GANs on different parts of the body (as per the assignment).
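To give a flavour of the augmentation side of this, the sketch below shows an Albumentations pipeline of the kind used to fight overfitting on video frames - the exact transforms and parameters here are illustrative assumptions, not my original pipeline.

```python
# Illustrative Albumentations augmentation pipeline for video frames;
# the specific transforms and parameters are assumptions for this sketch.
import albumentations as A
import cv2

augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),
    A.HueSaturationValue(p=0.3),
    A.GaussNoise(p=0.2),
    A.Resize(256, 256),
])

frame = cv2.imread("frame_0001.png")          # hypothetical extracted video frame
augmented = augment(image=frame)["image"]     # augmented frame fed to the Recycle-GAN
```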
- Parallel Scientific Computing II [distributed memory programming / MPI / PETSc]: 89
- Classical beam theory discretisation (Euler-Bernoulli) [20%]: 81
- Distributed Memory Multi-grid Solvers (W-Cycles, V-Cycles, Jacobi Relaxation...) [80%]: 91
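For a flavour of the building blocks these solvers are made of, here is a minimal serial NumPy sketch of a single weighted-Jacobi smoothing sweep for the 1D Poisson problem - the coursework itself was distributed-memory MPI/PETSc multi-grid (V- and W-cycles), so this is purely illustrative.

```python
# Serial, illustrative weighted-Jacobi smoothing sweep for -u'' = f on [0, 1]
# with Dirichlet boundaries; a real multi-grid solver recurses to coarser grids.
import numpy as np

def jacobi_sweep(u, f, h, omega=2.0 / 3.0):
    u_new = u.copy()
    u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
    return u_new

n = 65                       # grid points, so h = 1/(n - 1)
h = 1.0 / (n - 1)
f = np.ones(n)               # hypothetical right-hand side
u = np.zeros(n)              # initial guess (boundary values stay fixed at 0)
for _ in range(100):         # plain smoothing iterations for demonstration
    u = jacobi_sweep(u, f, h)
```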
- Mathematical Finance [Stochastic Calculus + Integrals, Brownian Motion, Martingales, Financial Derivatives]: 56
Natural Sciences' Prize for Outstanding Level 3 Achievement awarded by the Board of Examiners.
- Combining Recent Advances in Reinforcement Learning for Super Mario Bros - R2D4-RL: 92, 6th in year. [PyTorch]
For my Dissertation, I surveyed the field of Reinforcement Learning and spent my project investigating how best to combine many of its recent advances. In doing so, I created an AI agent capable of teaching itself to play the game Super Mario Bros. to a superhuman level, achieving new state-of-the-art performance within the field. This demo shows the agent playing Worlds 5 and 7, and was taken from the project's final presentation.
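To give a concrete (and deliberately simplified) taste of one ingredient such an agent can combine, here is a PyTorch sketch of an n-step double-Q learning target - this particular formulation is an illustrative assumption, not code lifted from R2D4-RL.

```python
# Illustrative n-step double-Q target; shapes and hyperparameters are assumptions.
import torch

def n_step_double_q_target(online_net, target_net, next_obs, rewards, dones,
                           gamma=0.997, n=5):
    """rewards: (batch, n) rewards over n steps; next_obs: observations after n steps;
    dones: (batch,) float flags marking episode termination."""
    with torch.no_grad():
        # Double Q-learning: the online net selects the action, the target net evaluates it.
        best_actions = online_net(next_obs).argmax(dim=1, keepdim=True)
        bootstrap = target_net(next_obs).gather(1, best_actions).squeeze(1)
        # Discounted n-step return plus the bootstrapped value (zeroed if the episode ended).
        discounts = gamma ** torch.arange(n, dtype=torch.float32)
        n_step_return = (rewards * discounts).sum(dim=1)
        return n_step_return + (gamma ** n) * bootstrap * (1.0 - dones)
```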
- Deep Learning - pegasus-lightweight-gan: 104/100, 1st in year. [PyTorch]
The task was to create a Pegasus (a winged horse) by training generative models solely on a dataset (STL-10 or CIFAR-10) containing no winged horses - although the datasets do contain some horses, birds and planes. After extensively experimenting with, designing and tweaking various flow-based generative models and VAEs (variational autoencoders) - as these were theoretically better suited to the task and I wanted to try something novel - I decided to apply the then state-of-the-art "lightweight" GAN (generative adversarial network) to the task. Above you can see some interpolation within the latent space of a partially-trained model, and below you can see some of the final Pegasi I produced. Several of my Pegasi were used as exemplar feedback to the class.
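For the curious, the latent-space interpolation shown above boils down to something like this PyTorch sketch - the generator and latent dimension are placeholders rather than my trained model.

```python
# Illustrative latent-space interpolation; `generator` and `z_dim` are placeholders.
import torch

@torch.no_grad()
def interpolate_latents(generator, z_dim=256, steps=8):
    z_a, z_b = torch.randn(1, z_dim), torch.randn(1, z_dim)
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    # Evenly spaced linear blends between the two latent codes (one row per step).
    z_interp = (1 - alphas) * z_a + alphas * z_b
    return generator(z_interp)        # (steps, C, H, W) batch of generated images
```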
- Reinforcement Learning - R2ND4-Gravitar-RL: 100, 1st in year. [PyTorch]
We were tasked with creating a Reinforcement Learning agent to play the notoriously difficult Gravitar from the Atari-57 suite. I therefore looked for the then state-of-the-art Reinforcement Learning model for Atari (R2D2) and re-created it as faithfully as I could on my limited hardware. I produced the best agent in the class, and my convergence graph was used as exemplar feedback to the cohort (one of two such graphs).
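The heart of R2D2 is a recurrent (LSTM) Q-network run over sequences of frames; the sketch below is a heavily simplified PyTorch illustration of that idea (no distributed actors, prioritised replay or burn-in), not my actual implementation.

```python
# Minimal recurrent Q-network sketch: a DQN-style conv trunk feeding an LSTM.
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    def __init__(self, num_actions, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(         # standard conv trunk for 84x84 grey frames
            nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(64 * 7 * 7, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, num_actions)

    def forward(self, frames, hidden_state=None):
        # frames: (batch, time, 1, 84, 84) -> Q-values: (batch, time, num_actions)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.view(b * t, *frames.shape[2:])).view(b, t, -1)
        out, hidden_state = self.lstm(feats, hidden_state)
        return self.q_head(out), hidden_state
```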
- Parallel Scientific Computing I - multicore-n-planet-simulator: 90, 1st in year. [C++, vectorized and multicore programming]
Vectorized and multicore n-body simulators, written and extensively optimised in C++ to scale to millions of particles/planets on a single node of a supercomputer.
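The coursework itself was hand-optimised C++, but the all-pairs gravitational kernel at its core looks roughly like this NumPy sketch (units with G = 1 and the softening value are illustrative assumptions).

```python
# Illustrative vectorised all-pairs gravitational acceleration kernel (G = 1).
import numpy as np

def accelerations(pos, mass, softening=1e-3):
    """pos: (N, 3) positions, mass: (N,) masses -> (N, 3) accelerations."""
    diff = pos[None, :, :] - pos[:, None, :]         # (N, N, 3) pairwise separations
    dist2 = (diff ** 2).sum(-1) + softening ** 2     # softened squared distances
    inv_dist3 = dist2 ** -1.5
    np.fill_diagonal(inv_dist3, 0.0)                 # remove self-interaction
    return (diff * (mass[None, :, None] * inv_dist3[:, :, None])).sum(axis=1)
```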
- Recommender Systems - CACBCF-Recommender-System: 92, 3rd in year.
I created a "Novel Context Aware Restaurant Recommender System Using Content-Boosted Collaborative Filtering" using a custom hybrid scheme on the Yelp Dataset. To interface with this Recommender System, I created a fully-fledged GUI where user can log in, register, submit new reviews and retrieve personalized restaurant recommendations using Content Aware Content-Boosted Collaborative Filtering. In simplified terms, the recommender would look for restaurants in your vicinity (context-aware, as the recommender interfaces with location services) that match the type of restaurants you like (content-based) and that are liked by others with similar tastes to you (collaborative filtering). Due to being highly optimized, the recommender can retrain itself almost instantly when a user submits a new review or on alteration to a prediction. This means the user can interact with the recommender and see the recommendations change in real-time.
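As a toy illustration of the content-boosted idea (the real CACBCF scheme, including its location-based context-awareness, is considerably more involved), the blending step can be sketched like this:

```python
# Toy content-boosted collaborative filtering: fill unrated cells with
# content-based predictions, then predict via similarity-weighted user averages.
# All inputs here are hypothetical dense matrices, purely for illustration.
import numpy as np

def content_boosted_cf(ratings, content_pred, user_sim):
    """ratings: (U, I) with np.nan for unrated items; content_pred: (U, I);
    user_sim: (U, U) user-user similarity matrix."""
    pseudo = np.where(np.isnan(ratings), content_pred, ratings)  # content-boosted matrix
    weights = np.abs(user_sim).sum(axis=1, keepdims=True) + 1e-9
    return (user_sim @ pseudo) / weights                         # (U, I) predicted ratings
```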
- Cryptography (unofficially, by special assessment over summer): 100
- Probability: 87, 19th in year.
- Decision Theory: 91, 3rd in year.
- Operations Research: 75, 24th in year. [Linear and Non-Linear Optimization Mathematics]
Natural Sciences' Prize for Outstanding Level 2 Achievement awarded by the Board of Examiners.
- Programming Paradigms: 98, 3rd in year. [C, Haskell & Java]
- Conway's Game of Life: 97 [C]
- Software Methodologies: 86, 12th in year.
- Machine Learning - machine-learning-project: 92
- AI Search - Lin-Kernighan-TSP: 97
- Image Processing: 89 [OpenCV]
- Computer Graphics - WebGL-from-scratch: 88 [WebGL]
Note: This scene was constructed entirely from scratch, using no external libraries. This includes all shaders, objects, animations, lighting, texturing etc...
- Networks and Systems: 88, 14th in year.
- Cyber Security: 96
For this assignment, we were given a sandbox Linux Virtual Machine of a fake company containing ~25 assorted vulnerabilities. We had to find, analyze and mitigate up to 18 of these vulnerabilities. This was an incredibly creative and enjoyable coursework, which I'm very grateful to Dr Chris G. Willcocks for creating.
- Compiler Design - first-order-logic-parser: 97
This demo shows the Python script first reading in a file of symbols and first-order logic, then generating a formal grammar of terminal/non-terminal symbols and production rules, and finally attempting to parse the input formula (giving detailed logs if the formula was malformed, or producing a full parse tree if it was well-formed). Note: in this demo, the parse tree output is actually shown before the logs and grammar.
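To illustrate the recursive-descent approach, here is a toy parser for a simplified fragment (negation, binary connectives and atoms only) - the real coursework handled full first-order logic with quantifiers, detailed error logs and parse-tree output.

```python
# Toy recursive-descent parser for an illustrative grammar:
#   F -> '~' F | '(' F OP F ')' | ATOM,  with OP in {'&', '|', '->'}.
def parse_formula(tokens, pos=0):
    tok = tokens[pos]
    if tok == "~":                                   # negation
        child, pos = parse_formula(tokens, pos + 1)
        return ("NOT", child), pos
    if tok == "(":                                   # binary connective
        left, pos = parse_formula(tokens, pos + 1)
        op = tokens[pos]
        if op not in ("&", "|", "->"):
            raise SyntaxError(f"expected connective at token {pos}, got {op!r}")
        right, pos = parse_formula(tokens, pos + 1)
        if tokens[pos] != ")":
            raise SyntaxError(f"expected ')' at token {pos}")
        return (op, left, right), pos + 1
    return ("ATOM", tok), pos + 1                    # atomic formula

tree, _ = parse_formula(["(", "p", "&", "~", "q", ")"])
# tree == ('&', ('ATOM', 'p'), ('NOT', ('ATOM', 'q')))
```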
- Distributed Systems: 70
- Networks - socket-level-message-board: 91
- Complex Analysis: 78
- Analysis in Many Variables: 70
- Algebra: 78
- Computational Thinking: 76
- Error Correcting Codes: 100
- Modeling with Graphs: 100
- Bioinformatics: 96
- Computer Systems: 77
- Digital Electronics
- Operating Systems
- Databases (Relational)
- Computer Architectures:
- Little Man Computer: 71 [Assembly/Machine Code]
This demo shows a 99-mailbox assembly program designed to convert an input integer from any input base to any given output base (this was one of my favorite courseworks ever!). Specifically, this demo shows the "CPU" converting the number 343 from base 10 to base 5 (with output 443). Note: there is a lot of computation cut out of this gif!
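For clarity, the standard repeated-division approach to base conversion looks like this in Python - the LMC coursework implements equivalent logic in assembly across its 99 mailboxes, and may differ in detail.

```python
# Illustrative base conversion: read digits in base_in, then repeatedly divide
# by base_out, collecting remainders as the output digits.
def convert_base(digits, base_in, base_out):
    value = 0
    for d in digits:                         # interpret the input digits in base_in
        value = value * base_in + d
    out = []
    while value > 0:                         # repeated division yields base_out digits
        value, remainder = divmod(value, base_out)
        out.append(remainder)
    return out[::-1] or [0]

print(convert_base([2, 5, 5], 10, 5))        # 255 in base 10 -> [2, 0, 1, 0] in base 5
```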
- Linear Algebra: 85
- Calculus and Probability: 75
- Analysis: 83
- Programming and Dynamics: 74
- Captain of College Squash B and C teams.
- St Cuthbert's Society Darts As and Cs, Table Tennis As, Squash As, Bs and Cs & Pool Ds.
For more details on my roles, and non-software-engineering-specific experiences, see my LinkedIn.
- Delta - Forward Deployed Engineer / Tech Lead.
- Extreme Blue Intern - 3 months:
- Worked in an autonomous team of four students to design, prototype and deliver a solution to tackle a problem of our choice within the Retail Industry for a well-known fashion retailer using hybrid-cloud technology.
- Despite reduced development time, I delivered a fully-fledged MVP and a business case to take forward to the CTO - an idea that both the Retail Director and the Head of Operations & Omnichannel Transformation said they thought had real promise within the industry.
- IBM predicts our solution will save the company £37m p/a by halving customer returns and increasing customer loyalty.
- Submitted a US patent application covering our algorithm.
- I personally led the back-end Agile development team of David Shipman and myself, which required us to learn/use JavaScript/Node.js, Apollo-GraphQL, MongoDB and IBM Cloud Object Storage. This required envisioning any functionality that the front-end would need by completion and delegating tasks accordingly. I completed the back-end three weeks ahead of schedule.
- I then joined the front-end team, where I used React and Ionic to build a cross-platform customer-facing phone application.
- Despite having never touched React or Ionic before, I quickly became the go-to member to solve the hardest React/Ionic bugs and blockers the team encountered. I was incredibly proud of this feat.
- I also integrated our back-end directly with the client's website for search and product functionality, to demonstrate to the client the "pluggability" and ease of integration of our design schema.
- In week 4, our well-known grocery client pulled out. We therefore had to completely pivot away from the past three weeks' work and transfer the insights gained to a different sector. After an initial shock period, we iterated our Design Thinking to produce an even more impactful solution.
- This was a valuable insight into the real world of working with clients whose needs may drastically change at a moment's notice.
- Computer Science Demonstrator / Teaching Assistant - 2 years:
- During my 3rd and 4th years, after achieving 98 in Programming Paradigms, I was recruited to lead three weekly Programming Paradigms practical groups. During these sessions I taught and assisted groups of ~20 2nd-year university students writing code in C, Java and Haskell.
- Data Science Consultant - 5 months:
- Main Projects:
- Discovering, scouting and then analysing the applicability of various data sources for integration into global footfall/busyness prediction models ~ 3 months.
- Started and have since been managing Lanterne's open-source Opening Hours retrieval project, poi-info-scraper. I subsequently built upon the open-source codebase internally to drastically increase its global coverage, integrating it with a hybrid Bing and Yelp scraper I built, which then fed directly into our AWS pipeline ~ 1.5 months.
- Machine Learning Consultant - 3 months:
- After being offered a promotion towards the end of my summer of work at Lanterne, I began working alongside a small team of Agile Data Scientists during my first term of 3rd year at Durham University. During this time, I primarily applied Deep Learning, constructing a Convolutional Neural Network to forecast mobile location data throughout the UK for use in global footfall/busyness prediction models (Crowdless). For an example of the work I undertook, see my fork of ST-ResNet-Pytorch.
- 6th Form Academic Scholar.
- GCSEs: 10 A*s, 1 A (with French, Spanish and Latin as languages).
- A Levels:
- A* in Maths (590/600 UMS) - 2nd in year,
- A* in Further Maths (573/600 UMS) - 2nd in year,
- A* in Chemistry (251/270 UMS with an A* boundary of 237/270) - 1st in year,
- A* in Physics (244/270 UMS, with an A* boundary of 219/270) - 1st in year.
- Standalone Qualifications:
- "Gold with Distinction" for my "Independent Study" paper on Special Relativity (Norwich School's equivalent of a more in-depth EPQ).
- A paper on Special Relativity, which I submitted for the Independent Study Program run by Norwich School. The paper explores the many ways in which it is or could be possible (or impossible) to travel faster than "the speed of light" - that being either c, when light propagates through a vacuum, or some fraction of c when light propagates through another, optically denser medium. I received a Gold with Distinction for my submission.
- A (97/100) in OCR Additional Mathematics (boundary was ~59/100).
- "Gold with Distinction" for my "Independent Study" paper on Special Relativity (Norwich School's equivalent of a more in-depth EPQ).
- Activities and Societies: Young Leader and Scout at the 8th Norwich Sea Scouts. Karate, Squash, Tennis, Table Tennis, School Ski Team, Running, Rugby.
- Primary school education at an all-Italian-speaking local school in Loreto Aprutino, Abruzzo.
- I am therefore fluent in Italian.
- Activities and Societies: Basketball, Skiing, Swimming.