Overview
For Big Data Research Day 2016 at Boston College, I built a demonstration to break down how smartphone data is organized, streamed and processed in real time for applications such as mobile VR. Here is the resulting paper from this research, written by Ryan Reede and Cam Lunt, on GitHub here.
Smartphones and tablets have incredible hardware built into their systems, but more often than not, the software built for these devices doesn’t tap into it. This project explores methods by which the massive number of data points from a mobile sensor can be reliably and quickly reported to a remote server. With our software, we set out to help the layman understand the practical applications of accessing smartphone data streams by reliably transmitting them from a mobile device to a processing service. From mobile Virtual and Augmented Reality to UI/UX research, gaming, navigation and more, the applications for transmitting information from mobile devices to servers are endless. We wanted to be able to process and visualize the data coming off the device wirelessly. From the get-go, a number of problems became very clear to us.
Understanding the Data
Of the four V’s that describe the difficulty of managing and working with ‘Big Data’ (volume, variety, veracity and velocity), velocity was the biggest hurdle for us to overcome. The Android tablet we were working with was designed for applications such as mapping interiors, thanks to its extremely precise sensors and a 2.3GHz quad-core mobile processor. As a byproduct of such powerful hardware, our model was capable of producing a ton of data in very little time. Beyond the speed at which the data was being offloaded from the tablet, we needed a reliable way to transport the data stream from the Android device to a central server where it could be processed and/or visualized, such that a client with limited knowledge of data streams, linear transformations and Euler coordinates could understand what the data coming off the tablet represented. Creating a proper visualization of the 3D axes and transformations became the final major issue for us to tackle, given the properties of our data.
Although the Project Tango device is capable of producing nearly 250,000 data points per second, we understood the complexity associated with processing a live data stream. Additionally, we didn’t want to clog our project with extraneous data that wasn’t vital to our end goal of helping a non-technical person understand the practical applications of such sensor data. Our work focused on the rotational data that came off the tablet in four parts (a quaternion). More info on a quaternion (from cprogramming):
A quaternion represents two things. It has an x, y, and z component, which represents the axis about which a rotation will occur. It also has a w component, which represents the amount of rotation which will occur about this axis. In short, a vector, and a float. With these four numbers, it is possible to build a matrix which will represent all the rotations perfectly, with no chance of gimbal lock.
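To make the quoted definition concrete, here is a small illustrative sketch (not code from the demo itself) of how one of the (x, y, z, w) readings coming off the tablet can be turned into the rotation matrix the quote describes:

    // Illustrative only: convert a sensor quaternion (x, y, z, w) into a
    // row-major 3x3 rotation matrix.
    public final class QuaternionUtil {

        public static double[][] toRotationMatrix(double x, double y, double z, double w) {
            // Normalize first so the matrix stays orthonormal even if the
            // sensor reading drifts slightly from unit length.
            double n = Math.sqrt(x * x + y * y + z * z + w * w);
            x /= n; y /= n; z /= n; w /= n;

            return new double[][] {
                {1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)},
                {2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)},
                {2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)}
            };
        }

        public static void main(String[] args) {
            // A 90-degree rotation about the z-axis: (x, y, z, w) = (0, 0, sin45, cos45).
            double s = Math.sqrt(0.5);
            double[][] r = toRotationMatrix(0, 0, s, s);
            System.out.println(java.util.Arrays.deepToString(r));
        }
    }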
Programming Paradigm
There were multiple layers of communication involved in reliably transmitting the sensor data from the tablet to the processing server. Due to the nature of Kafka and the size of the Kafka library, mobile devices are not recommended for use as Kafka producers. Kafka is intended to persist and manage messages, not necessarily to be the communication agent between devices. For that reason, all of the sensor readings were sent through a socket from the mobile device to a host Java instance on the remote server. Once the host Java instance receives a sensor reading, it writes the message to a Kafka topic. By writing to the Kafka topic as soon as the sensor data reaches the server, we ensure that no messages get lost and that we are able to create a robust stream. The Kafka consumer is a Jetty server process running on the same remote server. The Jetty server consumes Kafka messages from the sensor data topic and relays them to the front-end JavaScript instance through a WebSocket. Kafka consumers cannot be implemented in front-end JavaScript, so this paradigm must be used. Once the JavaScript instance running in an observer’s browser receives the sensor data, it uses it to update a 3D graphic on the screen. Using the producer-consumer paradigm gives us a very generalizable solution for multiple streams, and the system could easily be extended to receive multiple different types of data from the tablet. Using a queue-based paradigm allows for elasticity in the consumption of messages; if the end of the line of communication slows down or pauses, the system is safe thanks to Kafka’s persistence of data. The durability of the system ensures no messages get lost and that the system is pause tolerant. Below is the Server class that receives the sensor readings over the socket and writes them to Kafka as KeyedMessages.
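What follows is a minimal sketch of such a class, assuming the pre-0.9 Kafka producer API (Producer and KeyedMessage from kafka.javaapi.producer) that was current at the time, a plain TCP ServerSocket for the tablet connection, and newline-delimited readings; the topic name and port are illustrative, not the project's actual configuration.

    // Sketch of a socket-to-Kafka relay: the tablet connects over TCP and
    // every line it sends is forwarded to a Kafka topic as a KeyedMessage.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class SensorRelayServer {
        private static final String SENSOR_TOPIC = "tango-sensor-data"; // assumed topic name
        private static final int PORT = 9999;                           // assumed tablet port

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("metadata.broker.list", "localhost:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            Producer<String, String> producer =
                    new Producer<String, String>(new ProducerConfig(props));

            try (ServerSocket server = new ServerSocket(PORT)) {
                while (true) {
                    // Accept a connection from the tablet and forward each reading.
                    try (Socket tablet = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(tablet.getInputStream()))) {
                        String reading;
                        while ((reading = in.readLine()) != null) {
                            producer.send(new KeyedMessage<String, String>(
                                    SENSOR_TOPIC, "rotation", reading));
                        }
                    }
                }
            } finally {
                producer.close();
            }
        }
    }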
Analysis and Conclusion
Although our project did not leverage machine learning algorithms or make predictions of any sort, it was still an extremely valuable exercise for a number of reasons. Primarily, we became much more proficient in working with data streams and streaming objects, and in how they should be handled when communicating between servers and layers. In many enterprise instances of Apache Spark, data arrives as a live stream, rarely as a backlog of data in a clean .csv file. Likewise, we familiarized ourselves with Kafka, an industry-standard tool for breaking data streams into individually analyzable and fault-tolerant chunks. Our animation ended up introducing a noticeable amount of lag, but this too taught us a valuable lesson: Kafka is not optimized for real-time data visualization, and the flow of data through a system should pass through as few layers as possible. Had the entire development process gone a little more smoothly and quickly, it would have been nice to implement some sort of data analysis. Given more time, the point in our system architecture where the data is sent to the browser would have been the most appropriate place to add an instance of Spark, since the data queuing from Kafka was already in place. Additionally, we could have had the data that gets sent to the browser come directly from the tablet instead of through the Kafka channel, which would have made the visualization much smoother.
Overview
For Business Intelligence and Analytics (Fall '14) with Prof. Ransbotham, our semester-long group project tasked us with analyzing and making data-based predictions on the Yelp Academic dataset. Below is the portion of the project I worked on, which determined how we would implement our own rankings feature for restaurants and attractions.
(disclaimer: I knew nothing about Machine Learning at the time)
Good or Popular?
We needed to determine how to factor in both the popularity of a restaurant and its average review to find the [quantitatively speaking] best food in Austin. With five possible stars (in 0.5-star increments) to judge an overall dining experience, a simple upvote formula such as those found on Instagram or Reddit would not work well. Although heavily generalized, the idea there is to divide the number of upvotes by a post’s view count to determine how good the post is, but this only works with a binary like/non-like voting system. We had 5 (or really 10) stars to use to determine how ‘good’ a restaurant is, so a search on the web for some sort of weighted rank that factors this in turned up some great answers.
On Math Stack Exchange, an answer showed how to use a Bayesian approach to determine this sort of weighted rank. We applied the formula to our Yelp data as-is for testing, then tweaked the weights to make the results more reasonable for our data. For instance, in one test, the popularity of a restaurant did not get enough recognition: restaurants averaging 5.00 stars with under 5 reviews were ranked higher than restaurants averaging 4.5 stars with over 50 reviews. Even after some heavy modification, we struggled to find a system that generalized well for all of our data. It was still clear to us, however, that using a rank-based aggregate score built on multiple factors was the right way to approach the problem. Using R, we computed this data and adjusted the dataframe to store the new values. Once we had this value computed, we looked for ways to visualize it all in a meaningful way.
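For illustration, the general shape of such a Bayesian weighted rating (shown here as a small Java sketch rather than our original R code, and with made-up weights) shrinks a venue's average toward the global mean until it has accumulated enough reviews:

    // Illustrative IMDb-style Bayesian shrinkage, not the project's exact weights:
    // a venue's average R over v reviews is pulled toward the global mean C
    // until it has roughly m reviews' worth of evidence.
    public final class WeightedRank {

        public static double weightedRating(double avgStars, int reviewCount,
                                            double globalMeanStars, int minReviews) {
            double v = reviewCount;
            double m = minReviews; // tuning knob: how much popularity should count
            return (v / (v + m)) * avgStars + (m / (v + m)) * globalMeanStars;
        }

        public static void main(String[] args) {
            double globalMean = 3.7; // hypothetical mean across all Austin venues
            // A 5.0-star spot with 4 reviews vs. a 4.5-star spot with 80 reviews:
            System.out.println(weightedRating(5.0, 4, globalMean, 25));  // ~3.88
            System.out.println(weightedRating(4.5, 80, globalMean, 25)); // ~4.31
        }
    }

The m parameter is the knob we kept adjusting: raising it makes review volume count for more relative to a handful of perfect scores, which is exactly the failure case described above.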
Data Visualization
Since we were working with data that was so heavily dependent on location, finding a way to present it with mapping in mind was key. We needed a simple solution that would let the valuable lat./long. data show how location can affect rankings. Tableau wound up being the tool we used for this. Tableau was incredibly intuitive; it recognized our input .csv file as geographic and defaulted to a map view. To make the visualization even more telling, we adjusted the parameters of the map so that both the scale of the points and their color temperature reflected our data properly. The scale ended up showing how many people had reviewed the location, and a darker red indicated a higher score on the Bayesian weighted-rank scale.
During my summer internship at a company called Sync On Set (they just won an Emmy!), I got to know Derek Switaj, the other intern, also from Boston College. As it would turn out, Derek was an aspiring writer, and he came to me at the end of the summer with a pilot for a series he was looking to bring to life. I was skeptical at the time because I knew as well as anyone in Hollywood that it takes more than just a script and a camera to bring something incredible to life. The premise was House of Cards, but set on a college campus. I liked it a lot, but doing a full eight-episode series was not how I envisioned spending my junior year of college. Come the start of the school year, I told my friend Max Prio about it (we ran a production company on campus at the time), and he felt the same way I did. It was clear that Derek had the fire in him to make this happen, but it would simply take too much time when we had a backlog of paid projects to get cracking on as our production company was finally gaining traction. Initially the plan was this: we'd help Derek make a legit pilot, teach him everything we knew about the technical stuff (cinematography, audio, post, etc.) and then he could use that knowledge and the pilot as a bargaining chip to bring in a new crew from the film department.
We started with a timelapse intro that we shot throughout Boston. For initial casting calls we needed something to draw talent in and prove that we were legit. I think it worked. Before long, Derek was scheduling callbacks in his converted living room. Max and I went along with it– this was already more legit than any narrative piece we had ever worked on. Come October, we had 20 speaking roles cast and Derek had already written episodes 2 and 3. Max and I began to scout locations and storyboard the pilot. Jess Barbaria came along, started to market the project and somehow got over 1,000 Facebook fans for Mod of Cards.
The pilot was to be about 30 minutes long and had around 20 scenes in it. Getting it shot and cut before the semester-break deadline we set for ourselves was a tall order, but we managed by shooting around four times per week and cutting scenes together in our spare time. It was exhausting but a total blast. We released the first episode during finals and expected that to be it. Derek would find a new crew to take it over and [somehow] get five more episodes produced.
The pilot was good but not great. Somehow it still managed to get seen over 10,000 times in just a few months, and Max and I realized that without us behind the camera this project was about to die off. Without ever stating it explicitly, Derek knew all along that we wouldn't just do the pilot and bail. The cast was great, the crew was loyal and we were working on something awesome that had never been done before. Max and I dropped whatever paid work we had and spent the rest of our junior year working to bring Mod of Cards to life– making certain that each new episode was more technically sound and artistically ambitious than what came before.
The rest was history
There's not much more to say about what followed. The core crew of about five people spent 30-50 hours per week from October 2014 to June 2015 making Mod of Cards. My day became pretty routine: swim practice, class, then shoot and edit. Somehow I was still a ballboy for the Celtics. The year flew by, and in the end we had produced the first ever drama series on a college campus. I've been fortunate enough to work on some pretty cool projects at Boston College like Joycestick, but nothing will ever compare to Mod of Cards. The process of seeing anything– software, a movie, a TV show or even a business– come together is a thing of beauty, especially when everyone you're working with is as passionate about the product of the work as you are.
The late nights that became early mornings on the set of Mod of Cards are among my favorite memories of college. Spring semester 2015 I wound up with a 1.9 GPA and two D's. I eventually had to leave the swim team to keep Mod of Cards from derailing, one of the toughest decisions I've made. Anyway, all six episodes of Mod of Cards got completed, and there are few things I'm more proud of.
After the success of the McMullen Museum of Art Virtual Reality experience, Boston College Ireland commissioned me to build a similar experience for visitors to Dublin for the 2016 ACC kickoff football game between Georgia Tech and Boston College. To push myself to make the experience more impressive than the McMullen's, I decided to shoot and stitch 3D, 360 degree panoramas instead of opting for the same monoscopic flat panorama I had done previously.
From the start, I knew that working in 3D would be quite the challenge. Due to the parallax issues associated with stitching spheres, the geometry of rotating stereo viewpoints and numerous other issues, getting this entire project into working order would be no small task. Meanwhile (even as recently as 2015), 360 stereo for live video/photo capture was a widely unsolved problem, and those who had solved it, such as Facebook, weren't about to release their IP. Aside from capture, bringing the two channels of images into the graphics engine would cause additional concerns. I found some solid research on the web and began to prioritize. As each piece began to come together, the project as a whole seemed more feasible.
The Camera Rig
I started with the camera rig. Without the assets for the content, this project was obviously headed nowhere fast. As mentioned, 3D/360 camera rigs such as the Vuze didn't exist when I was building this project, so I had to create my own solution. I picked up a pair of Sony a6000 mirrorless cameras with matching fisheye lenses to serve as the two stereo viewpoints.
Once I had the two cameras, I needed a way to rigidly connect them by the hot shoe mount so that their lenses would stay together. No consumer products for this sort of thing existed, so I decided to 3D-print a part to match the specifications I needed. I measured the cameras down to the millimeter and learned how to build models in Autodesk 123D Design. Iterations of the plastic mount had to account for the structural integrity of the part (the camera/lens combo is heavy) as well as IPD angles and distances. Shown are different iterations of the part (I used the serrations on the part's edges to get accurate measurements between tests). After countless broken parts and images that wouldn't stitch even remotely correctly, I finally found a solid working model to use as I shot the entirety of the panoramas.
Shooting and Stitching
Shooting with the cameras was a relatively painless part of the process. Given that the cameras are positioned upright during shooting, I had to take six shots around to get coverage for a full panorama. Given how wide the lenses were, the nadir and zenith were picked up well enough by the cameras without having to rotate the rig up or down at all. I made sure to use the shoot-delay function in camera so that for longer exposures there would be no vibration to damage image quality.
Once all of the images were shot in the ten different locations around BC's campus, I brought them into PTGui for stitching and batch processing. Using the viewpoint correction tool in PTGui, I was able to account decently well for the fact that each camera was slightly off center during shooting, which would otherwise lead to inherent stitching errors. Where the viewpoint correction didn't find matching features, I brought the entire .psd file PTGui creates into Photoshop and manually skewed and shifted images to stitch them properly. The last step in PTGui was to convert the image spheres into a cubic format (six square images) for quick and easy processing. When all was said and done, I had these six squares for each of the individual panoramas I shot. Keeping files organized and separate from their L/R-side equivalents was vital, as I now had 120 images to keep track of. All that was left was to bring them into a graphics engine to bootstrap the panoramas along with the Oculus SDK for interactive stereo viewing.
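PTGui handled the sphere-to-cube conversion for us, but as a rough sketch of the math involved (my own axis conventions and nearest-neighbour sampling, not PTGui's internals): each cube-face pixel is treated as a direction vector, converted to longitude/latitude, and sampled from the equirectangular image. Only the front face is shown; the other five differ only in how the direction vector is assigned.

    // Rough illustration of equirectangular-to-cube-face sampling.
    import java.awt.image.BufferedImage;

    public final class CubeFaceSampler {

        /** Renders the front (+Z) face of size faceSize from an equirectangular pano. */
        public static BufferedImage frontFace(BufferedImage pano, int faceSize) {
            BufferedImage face = new BufferedImage(faceSize, faceSize, BufferedImage.TYPE_INT_RGB);
            for (int j = 0; j < faceSize; j++) {
                for (int i = 0; i < faceSize; i++) {
                    // Map the pixel to a direction on the unit cube's front face.
                    double x = 2.0 * (i + 0.5) / faceSize - 1.0;
                    double y = 1.0 - 2.0 * (j + 0.5) / faceSize; // flip so +y is up
                    double z = 1.0;
                    double r = Math.sqrt(x * x + y * y + z * z);

                    // Direction -> longitude/latitude -> equirectangular pixel.
                    double lon = Math.atan2(x, z);   // -pi..pi
                    double lat = Math.asin(y / r);   // -pi/2..pi/2
                    int u = (int) ((lon + Math.PI) / (2 * Math.PI) * (pano.getWidth() - 1));
                    int v = (int) ((Math.PI / 2 - lat) / Math.PI * (pano.getHeight() - 1));

                    face.setRGB(i, j, pano.getRGB(u, v));
                }
            }
            return face;
        }
    }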
Stereo Cubemaps for VR
For the Spring 2015 exhibit at the McMullen Museum of Art on campus at Boston College, my research advisor Prof. Nugent asked me to look at innovative ways to bring digital humanities into the mix. The exhibit was centered around the Irish Arts and Crafts movement for the 100th anniversary of Irish Independence. I spitballed a few ideas, but the suggestion of using Virtual Reality to let users visit the areas where the pieces of art they were looking at came from was the one we were most excited about. In the end, I ended up shooting, stitching and building my first complete project in VR. I budgeted out costs for an Oculus DK2, a new PC and some rental camera equipment, with which I travelled throughout the Irish countryside twice shooting panoramas.
Next, I stitched everything together in PTGui and Photoshop and brought each panorama into Unity as a skybox in its own scene. I wrote a script to change scenes and wired up some buttons, seated in 3D-printed housings, to the sides of the Oculus to let users adjust what they were seeing.
Control button sitting in a 3D-printed mount.
Although the scope of this project was limited, it was exciting to watch users inside the experience– it was the first VR experience for many. Given that the VR installation sat inside an art museum, the majority of the users were over the age of 60. The project didn't need to be done in 3D or offer any gaze-based interactivity; the medium was so foreign to them that just instructing them to move their heads once they put the headset on was enough. However simple the experience was, it proved to me the power of Virtual Reality as a tool for storytelling.
The process of building a game (especially for a platform as new and undefined as Virtual Reality– we're developing for the HTC Vive) is not a smooth one. Hiccups along the way are to be expected, and with Joycestick we managed roadblocks exceptionally well. Working with a team of students who are enrolled in multiple other [hard] classes, in addition to balancing social lives, clubs and sports, is inherently difficult. We managed to make it work by scheduling small deadlines, delegating tasks and staying on top of our Trello boards. That being said, as hard deadlines rolled around, little sleep was had.
But whether you're shipping an ambitious Virtual Reality project with CS students who have no prior work or game-engine experience, or a feature for a piece of enterprise software with an experienced team of developers, similar issues arise. Even in its MVP state, I was extremely proud of the work done on Joycestick, and I look forward to the work that continues to be done with it. In the meantime, I will be working on porting a lite version of Joycestick to mobile for Google Daydream.