docs: update fe about page
nicovandenhooff committed Apr 22, 2022
1 parent 5e902ab commit 00c7d8e
Showing 1 changed file with 67 additions and 12 deletions.
client/src/pages/about/About.js (79 changes: 67 additions & 12 deletions)
@@ -1,5 +1,5 @@
 import React from 'react'
-import { Container, Box, Typography } from '@mui/material';
+import { List, ListItem, ListItemText, Container, Box, Typography } from '@mui/material';
 
 
 export const About = () => {
@@ -16,44 +16,99 @@ export const About = () => {
         padding: '40px 40px 20px',
       }}>
         <Typography
-          variant="h5"
+          variant="h4"
           component="p"
           sx={{ mb: 1 }}
         >
-          About
+          Application Description
         </Typography>
         <Typography
           variant="body"
           component="p"
           sx={{ mb: 2 }}
         >
-          Indoor Scene Detector was built and is maintained by Nico Van den Hooff and Melissa Liow.
+          Indoor Scene Detector is a full stack computer vision application built with PyTorch, Captum, Flask, React, Docker, Heroku and GitHub Pages.
         </Typography>
+        <Typography
+          variant="body"
+          component="p"
+          sx={{ mb: 2 }}
+        >
+          Indoor Scene Detector is capable of classifying images of an indoor scene, such as a bedroom or a kitchen. Currently, Indoor Scene Detector includes support for ten categories of scenes: airport, bakery, bar, bedroom, kitchen, living room, pantry, restaurant, subway, and warehouse. Support for more classes is under development.
+        </Typography>
+        <Typography
+          variant="body"
+          component="p"
+          sx={{ mb: 2 }}
+        >
+          Four convolutional neural networks are available for classifying a scene. These include tuned versions of AlexNet, ResNet, and DenseNet, in addition to a simple "vanilla" CNN that has no transfer learning applied to it. If AlexNet, ResNet or DenseNet are used, Indoor Scene Detector demonstrates the power of transfer learning in computer vision, as the tuned versions of these networks should obtain much higher prediction accuracy than the simple CNN with no transfer learning.
+        </Typography>
+        <Typography
+          variant="h4"
+          component="p"
+          sx={{ mb: 1 }}
+        >
+          How to Use Indoor Scene Detector
+        </Typography>
       </Box>
       <Box sx={{
         padding: '40px 40px 20px',
         maxWidth: '1000px'
       }}>
+        <Typography
+          variant="body"
+          component="p"
+          sx={{ mb: 2 }}
+        >
+          <List>
+            <ListItem disablePadding>
+              <ListItemText primary="1. Select one of the preloaded images or upload your own to classify." />
+            </ListItem>
+            <ListItem disablePadding>
+              <ListItemText primary="2. Select the convolutional neural network you would like to use to classify the image." />
+            </ListItem>
+            <ListItem disablePadding>
+              <ListItemText primary="3. Press submit and your image will be classified." />
+            </ListItem>
+          </List>
+        </Typography>
         <Typography
           variant="h4"
           component="p"
           sx={{ mb: 1 }}
         >
-          About
+          Model Outputs
         </Typography>
         <Typography
           variant="body"
           component="p"
           sx={{ mb: 2 }}
         >
-          Indoor Scene Detector can be used to classify images of an indoor scene, for example a bedroom or a kitchen. Further, Indoor Scene Detector contains four different convolutional neural networks that can be used to classify an image. Specifically, tuned versions of AlexNet, ResNet, and DenseNet are available for use, in addition to a custom "vanilla" CNN that has no transfer learning. If AlexNet, ResNet or DenseNet are used, a user of the application can see the power of transfer learning in computer vision, as tuned versions of these networks obtain a much higher accuracy in predictions relative to the simple network with no transfer learning.
+          Each CNN will output the top three predictions for an image, ranked by probability in descending order. In addition, a heatmap of the image's Saliency attributes is plotted. Saliency is an algorithm that attempts to explain the predictions a CNN makes by calculating the gradient of the output with respect to the input. The absolute value of the Saliency attributes can be taken to represent feature importance.
+          To learn more, please see the <a href="https://arxiv.org/pdf/1312.6034.pdf">original paper</a> or the <a href="https://captum.ai/docs/algorithms#saliency">Captum</a> documentation.
         </Typography>
+        <Typography
+          variant="h4"
+          component="p"
+          sx={{ mb: 1 }}
+        >
+          Source Code
+        </Typography>
+        <Typography
+          variant="body"
+          component="p"
+          sx={{ mb: 2 }}
+        >
+          The source code for Indoor Scene Detector is hosted in this <a href="https://github.com/nicovandenhooff/cnn-dashboard">GitHub repository</a>.
+        </Typography>
+        <Typography
+          variant="h4"
+          component="p"
+          sx={{ mb: 1 }}
+        >
+          Attributions
+        </Typography>
         <Typography
           variant="body"
           component="p"
           sx={{ mb: 2 }}
         >
-          More text here
+          The data set used in building Indoor Scene Detector was the <a href="https://web.mit.edu/torralba/www/indoor.html">Indoor Scene Recognition</a> data set collected by MIT.
         </Typography>
       </Box>
     </Container >
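
Note: the transfer learning described in the new copy (tuned AlexNet/ResNet/DenseNet versus a from-scratch "vanilla" CNN) is not shown in this diff. A minimal PyTorch sketch of that setup, with the backbone choice, layer name, and class count assumed for illustration and not taken from this repository, would look roughly like:

# Hypothetical sketch of the transfer-learning setup described in the About text:
# start from an ImageNet-pretrained backbone and retrain only a new output head
# for the ten indoor-scene classes.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # airport, bakery, bar, bedroom, kitchen, living room, pantry, restaurant, subway, warehouse

model = models.resnet18(pretrained=True)  # ImageNet weights

# Freeze the convolutional backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a fresh 10-way classifier.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# A "vanilla" CNN trained from scratch would skip the pretrained weights,
# which is why it is expected to reach lower accuracy on the same data.

Freezing the backbone keeps the pretrained features intact, which is the advantage the About text attributes to the tuned networks.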
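Similarly, the "Model Outputs" copy (top three predictions plus a Saliency heatmap) maps onto Captum's Saliency API. A rough sketch, with the model, preprocessing, and the explain helper assumed for illustration only:

# Hypothetical sketch of the outputs described under "Model Outputs":
# top-3 class probabilities plus a Saliency attribution map.
import torch
import torch.nn.functional as F
from captum.attr import Saliency

def explain(model, image_tensor, class_names):
    model.eval()
    inputs = image_tensor.unsqueeze(0)  # add batch dimension

    # Top-3 predictions ranked by probability in descending order.
    with torch.no_grad():
        probs = F.softmax(model(inputs), dim=1).squeeze(0)
    top_probs, top_idx = probs.topk(3)
    top3 = [(class_names[int(i)], float(p)) for i, p in zip(top_idx, top_probs)]

    # Saliency: gradient of the predicted class score with respect to the input
    # pixels. Captum returns absolute values by default, which the About text
    # reads as per-pixel feature importance.
    saliency = Saliency(model)
    attributions = saliency.attribute(inputs, target=int(top_idx[0]))

    return top3, attributions.squeeze(0)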