diff --git a/.gitignore b/.gitignore
new file mode 100755
index 0000000..74b7cc7
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,10 @@
+.Rproj.user/
+*Rproj
+.Rhistory
+resume_files/
+.secrets
+.DS_Store
+README.html
+.Rproj.user
+.Rdata
+.httr-oauth
diff --git a/README.md b/README.md
new file mode 100755
index 0000000..0a8168b
--- /dev/null
+++ b/README.md
@@ -0,0 +1,40 @@
+## My pagedown rendered CV
+
+__Switch to googlesheets__
+
+As I get older and more crotchety, I find it more and more difficult to manually update a CSV. In response, I have moved the data-storing mechanism from a plain CSV to Google Sheets using the wonderful [`googlesheets4` package](https://googlesheets4.tidyverse.org/index.html). This makes updating much easier and also makes it easy to store all the other info that didn't feel right to put into a CSV before (like the intro and aside text) right alongside everything else, as separate sheets within the main spreadsheet.
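+
+Loading the data then boils down to a few `read_sheet()` calls. Here is a minimal sketch (assuming the sheet is shared as "anyone with the link can view" and uses the sheet names from this repo):
+
+```r
+library(googlesheets4)
+
+# Public sheet, so skip authentication entirely
+# (gs4_deauth() in googlesheets4 >= 0.2.0; older versions call it sheets_deauth())
+gs4_deauth()
+
+sheet_url <- "https://docs.google.com/spreadsheets/d/14MQICF2F8-vf8CKPF1m4lyGKO6_thG-4aSwat1e2TWc"
+
+position_data <- read_sheet(sheet_url, sheet = "positions")
+text_blocks   <- read_sheet(sheet_url, sheet = "text_blocks")
+```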
+
+I have tried to keep the whole thing as easy as possible to understand and modify by using a publicly available sheet and preserving the old CSV-driven workflow behind a boolean variable that can be set in the setup chunk.
+
+
+## Structure
+
+This repo contains the source code and rendered output of my CV, built with the [pagedown package](https://pagedown.rbind.io) and a modified version of its 'resume' template.
+
+The main files are:
+
+- `index.Rmd`: Source template for the CV; contains a variable `PDF_EXPORT` in the setup chunk that switches styles for PDF vs. HTML output.
+  - `index.html`: The final output of the template when the variable `PDF_EXPORT` is set to `FALSE`. View it at [nickstrayer.me/cv](http://nickstrayer.me/cv).
+  - `strayer_cv.pdf`: The final exported PDF as rendered by Chrome on my Mac laptop. Links are collected as footnotes and a note about the online version is added.
+- `resume.Rmd`: Source template for the single-page resume.
+  - `resume.html`/`strayer_resume.pdf`: Results for the single-page resume.
+- `parsing_functions.R`: A series of small functions for parsing a position entry into the proper HTML format. Includes logic for removing links if needed, etc. (See the sketch after this list.)
+- `gather_data.R`: Loads the data that makes up the body of both the CV and resume. Pulls either from a specified Google Sheet or from multiple CSVs. (Examples of both are provided in the repo.)
+- `csvs/*.csv`: A series of CSVs containing the information for the CV and resume. Included as examples in case the non-googlesheets method of storing data is preferred.
+- `css/`: Directory containing the custom CSS files used to tweak the default 'resume' format from pagedown.
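+
+For reference, `print_section()` turns each row of the position data into a small block of pagedown-flavored markdown along these lines (a sketch based on the `glue_data()` template in `parsing_functions.R`; the values are placeholders):
+
+```md
+### Founding 7th Grade Science Teacher
+
+Houston, TX
+
+Teach For America / YES Prep Public Schools
+
+2013 - 2011
+
+- Analyzed achievement data regularly for 145 students to inform instruction.
+- Increased average student achievement in 7th grade science by 12%.
+```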
+
+## Want to use this to build your own CV/resume?
+
+1. Fork, clone, or download the zip of this repo to your machine with RStudio.
+2. Make a copy of my [info-holding google sheet](https://docs.google.com/spreadsheets/d/14MQICF2F8-vf8CKPF1m4lyGKO6_thG-4aSwat1e2TWc/edit#gid=1730172225) and fill in your personal info for all the sheets (`positions`, `language_skills`, `text_blocks`, and `contact_info`).
+  a. If you want to use CSVs instead of Google Sheets, update the contents of the CSVs stored in the `csvs/` folder.
+3. Go through and personalize the supplementary text in whichever Rmd you want (`index.Rmd` for the CV, `resume.Rmd` for the resume).
+4. Print each unique `section` (as encoded in the `section` column of `positions.csv`) in your `.Rmd` with the command `position_data %>% print_section('education')`.
+5. Export a PDF by opening the rendered HTML in your browser, pressing `control/command + P`, and choosing "print to PDF". Alternatively, use `pagedown::chrome_print()` or add `knit: pagedown::chrome_print` to the Rmd header. See the sketch after this list and the [pagedown docs on printing](https://pagedown.rbind.io/#print-to-pdf) for more details.
+6. Let the world know how awesome you are! (Also, send me a tweet/email if you'd like and I will broadcast your version of the CV on this repo and/or Twitter.)
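+
+For steps 4 and 5, the relevant pieces look roughly like this (a sketch; the `print_section()` call goes in a chunk in your `.Rmd`, the print call is run from the R console):
+
+```r
+# Inside index.Rmd / resume.Rmd: one chunk per section id used in the positions data
+position_data %>% print_section('education')
+
+# From the console: render the Rmd and print it to PDF via headless Chrome
+pagedown::chrome_print("index.Rmd", output = "cv.pdf")
+```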
+
+## Looking for the old version with just a single CSV?
+
+The [blog post I originally wrote about this process](https://livefreeordichotomize.com/2019/09/04/building_a_data_driven_cv_with_r/) used an older version of this document. I think the new googlesheets method is easier to maintain and extend; however, the old version is alive and well [here](https://github.com/nstrayer/cv/releases/tag/1.0).
+
+
diff --git a/css/custom_resume.css b/css/custom_resume.css
new file mode 100755
index 0000000..2c7af69
--- /dev/null
+++ b/css/custom_resume.css
@@ -0,0 +1,66 @@
+
+* {
+  /* Override the default page margins */
+ --pagedjs-margin-right: 0.2in !important;
+ --pagedjs-margin-left: 0;
+ --pagedjs-margin-top: 0.2in;
+ --pagedjs-margin-bottom: 0.2in;
+}
+
+[data-id="main"] {
+ padding: 0 0.05in 0 0.05in;
+}
+
+/*[data-id="title"] {
+ margin: 0 0.5in 0.08in -0.5in;
+}*/
+
+.main-block {
+ margin-top: 0;
+ margin-left: -0.1in;
+}
+
+/* Customize some of the sizing variables */
+:root {
+ --sidebar-width: 12rem; /* Shrink sidebar width */
+ --sidebar-horizontal-padding: 0.02in; /* Reduce sidebar padding */
+}
+
+[data-id="main"]{
+ width: calc(var(--main-width) + 0.2in);
+}
+
+[data-id="subtitle"]{
+ width: calc(var(--main-width) + 0.2in);
+}
+/*
+.aside {
+ height: 100%;
+ width: var(--sidebar-width) !important;
+ padding-left: 0.75rem;
+ padding-right: 0;
+ padding-top: 0rem;
+}*/
+
+.aside .level2 {
+ margin-top: 0.315in;
+}
+
+.aside p {
+ margin-block-start: 0.75em;
+ margin-block-end: 0;
+}
+
+[data-id="skills"] ul {
+ margin: 0.05in 0 0.05in;
+}
+
+[data-id="contact"] ul {
+ padding-left: 0 !important;
+ margin-top: 0.75rem;
+}
+
+[data-id="disclaimer"] p {
+ margin-block-start: 0.15em;
+ margin-block-end: 0.15em;
+}
diff --git a/css/styles.css b/css/styles.css
new file mode 100755
index 0000000..d1f9731
--- /dev/null
+++ b/css/styles.css
@@ -0,0 +1,302 @@
+@import url('https://fonts.googleapis.com/css2?family=Sen:ital,wght@0,300;0,400;0,500;1,300;1,400&display=swap');
+@import url('https://fonts.googleapis.com/css2?family=Zilla+Slab:ital,wght@0,300;1,300&display=swap');
+
+/* Customize some of the variables */
+:root {
+ --pale-background-color: #EDF6F9;
+ --sidebar-width: 11rem; /* Shrink sidebar width */
+ --sidebar-background-color: var(--pale-background-color); /* Sidebar color*/
+ --sidebar-horizontal-padding: 0.01in; /* Reduce sidebar padding */
+ --decorator-outer-dim: 10px; /* Make position delineating circles larger */
+ --decorator-border: 2px solid #ACD7D4; /* Timeline line color*/
+}
+
+/* Main text is Sen*/
+body {
+ font-family: "Sen", sans-serif;
+ font-weight: 300;
+ line-height: 1.3;
+ color: #444;
+}
+
+strong {
+ font-weight:500;
+}
+
+img {
+ border-radius: 50%;
+ margin: 0 auto;
+ display: block;
+ padding: 0.5rem;
+ margin-bottom: 1rem;
+ border: 1pt #83C5BE solid;
+}
+
+[data-id="main"] {
+ padding-left: 1rem;
+ padding-right: 2.4rem;
+}
+
+/* Give headers Zilla Slab font */
+
+.header-block{
+ background-color: var(--pale-background-color);
+ /*border-bottom: 1pt #dedede solid;*/
+ width: var(--pagedjs-width);
+ height: 150px;
+ margin-top: calc(-1*var(--pagedjs-margin-top));
+ margin-left: -19px;
+ display: flex;
+ align-items: center;
+ justify-content: center;
+}
+
+.header-block-inner {
+ /*background-color: var(--pale-background-color);*/
+ width: 100%;
+ height: 150px;
+ display: flex;
+ align-items: center;
+ /*justify-content: center;*/
+ /* border-top: 1px solid #add8e65e; */
+}
+
+div.title {
+ font-family: "Zilla Slab", serif;
+ text-align: center;
+ padding: 3rem;
+ font-size: 3.5rem!important;
+ line-height: 1;
+ display: block!important;
+ color: #006D77;
+ max-width: 80%;
+}
+
+.item {
+ text-transform: uppercase;
+ display: block;
+}
+
+h1,
+h2 {
+ font-family: "Zilla Slab", serif;
+ color: #000;
+}
+
+h1{
+ text-transform: none;
+ color: silver;
+ font-weight: normal;
+ display: none;
+}
+
+h2 {
+ letter-spacing: 1pt;
+}
+.aside h2,
+#aside h2 {
+ color: #83C5BE;
+}
+
+h3 {
+ font-family: "Sen", sans-serif;
+ text-transform: uppercase;
+ letter-spacing: 1pt;
+ font-weight: 500;
+}
+
+/* When in PDF export mode make sure superscripts are nice and small and italic */
+sup {
+ font-size: 0.45rem;
+ font-style: italic;
+}
+
+/* Avoid the breaking within a section */
+.blocks {
+ break-inside: avoid;
+}
+
+* {
+  /* Override default left/right page margins for the sidebar layout */
+ --pagedjs-margin-right: 0.2in;
+ --pagedjs-margin-left: 0.2in;
+}
+
+/* sidebar left border */
+
+.pagedjs_sheet::before {
+ content: "";
+ display: block;
+ position: absolute;
+ z-index: 0;
+ left: 14.5rem;
+ top: 0;
+ bottom: 0;
+ width: 1px;
+ /*background-color: #f0f7fa;*/
+ height: 100%;
+}
+
+.pagedjs_sheet::after,
+.pagedjs_first_page .pagedjs_sheet::after {
+ content: "";
+ display: block;
+ position: absolute;
+ z-index: 0;
+ left: 0;
+ width: 14rem;
+ /*background-color: #f0f7fa;*/
+}
+
+.pagedjs_first_page .pagedjs_sheet::after {
+ height: 75.9%;
+ top: 248.1px;
+}
+
+.pagedjs_sheet::after {
+ height: 98.7%;
+ top: 7px;
+}
+
+
+a {
+ color: #e43b07;
+}
+
+a:hover {
+ text-decoration: underline;
+}
+
+.section.level1.aside {
+ height: 74.9%;
+ padding-top: 1.6rem;
+}
+
+
+[data-id="skills"] {
+ line-height: 1.2;
+}
+
+[data-id="disclaimer"] {
+ position: static;
+ text-align: left;
+ opacity: 90%;
+}
+
+
+.aside {
+ width: var(--sidebar-width);
+ padding: 0.9in 5px 0.9px 10px;
+ font-size: 0.8rem;
+ float: right;
+ position: absolute;
+ /*left: 0;*/
+}
+
+[data-id="subtitle"] {
+ width: var(--main-width);
+ padding: 0 0.25in 0 0.25in;
+ font-size: 0.8rem;
+ float: left;
+}
+
+[data-id="main"] {
+ width: var(--main-width);
+ padding: 0 0.25in 0 0.25in;
+ font-size: 0.7rem;
+ float: left;
+}
+
+#aside,
+.aside {
+ z-index: 3;
+ color: #004346;
+}
+
+#aside a,
+#aside p,
+#aside ul li{
+ color: #004346;
+}
+
+.details .place {
+ margin-top: 0.25rem;
+}
+
+.main-block:not(.concise) .details div {
+ padding-top: 0.005rem;
+}
+
+/* Laptop icon isn't centered by default which is lame */
+.fa-laptop {
+ margin-left: -3px;
+}
+
+/* When we have links at bottom in a list make sure they actually are numbered */
+#links li {
+ list-style-type: decimal;
+}
+
+/* Don't put the little fake list point in front of links */
+.aside li::before {
+ display: none;
+}
+
+/* Move closer to start and up towards header */
+.aside ul {
+ padding-left: 0rem;
+}
+
+.aside li::before {
+ position: relative;
+ margin-left: -4.25pt;
+ content: "• ";
+}
+
+/* Make sure elements in aside are centered and have a nice small text */
+.aside {
+ width: calc(var(--sidebar-width) + 9px);
+ line-height: 1.2;
+ font-size: 0.75rem;
+}
+
+/* Make little circle outline */
+.decorator::after {
+ background-color: #83C5BE;
+
+}
+
+/* Remove the fake bullets from lists */
+.aside li::before {
+ content: auto;
+}
+
+.skill-bar {
+ color: white;
+ padding: 0.1rem 0.25rem;
+ margin-top: 3px;
+ position: relative;
+ width: 100%;
+}
+
+
+/* When the class no-timeline is added we remove the ::after pseudo-element from the header... */
+
+/* Removes the pseudo-element on h2 tags for this section */
+.section.no-timeline h2::after {
+ content: none;
+}
+
+/* Without this padding the content sits right up against the title */
+.section.no-timeline h2 {
+ padding-bottom: 1rem;
+}
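+
+/* Usage (a sketch): add the class to the section heading in the Rmd, e.g.
+   `## Industry Positions {data-icon=suitcase .no-timeline}`, and the
+   timeline decoration is dropped for that section. */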
+
+/* Add styles for little cards */
+.info-card{
+ width: 220px;
+ float: left;
+ padding: 0.5rem;
+ margin: 0.5rem;
+ box-shadow: 1px 1px 4px black;
+}
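+
+/* Usage (a sketch): wrap content in a pandoc fenced div in the Rmd, e.g.
+
+   ::: {.info-card}
+   Highlighted content goes here.
+   :::
+*/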
diff --git a/cv.pdf b/cv.pdf
new file mode 100755
index 0000000..a7e3998
Binary files /dev/null and b/cv.pdf differ
diff --git a/gather_data.R b/gather_data.R
new file mode 100755
index 0000000..6e6175f
--- /dev/null
+++ b/gather_data.R
@@ -0,0 +1,36 @@
+# =================================================================================
+# This code uses google sheets to store the position info
+if(using_googlesheets){
+
+ library(googlesheets4)
+
+ if(sheet_is_publicly_readable){
+    # This tells googlesheets4 not to try to authenticate (gs4_deauth() in versions >= 0.2.0).
+    # Note that this will only work if your sheet has sharing set to "anyone with the link can view".
+ sheets_deauth()
+ } else {
+    # My info is in a public sheet so there's no need to authenticate, but if you
+    # want to use a private sheet, this is the way to do it.
+    # Designate a project-specific cache so we can render the Rmd without problems.
+ options(gargle_oauth_cache = ".secrets")
+
+ # Need to run this once before knitting to cache an authentication token
+ # googlesheets4::sheets_auth()
+ }
+
+
+ position_data <- read_sheet(positions_sheet_loc, sheet = "positions")
+ skills <- read_sheet(positions_sheet_loc, sheet = "language_skills")
+ text_blocks <- read_sheet(positions_sheet_loc, sheet = "text_blocks")
+ contact_info <- read_sheet(positions_sheet_loc, sheet = "contact_info", skip = 1)
+
+} else {
+
+  # Want to go old-school with just CSVs?
+ position_data <- read_csv("csvs/positions.csv")
+ skills <- read_csv("csvs/language_skills.csv")
+ text_blocks <- read_csv("csvs/text_blocks.csv")
+ contact_info <- read_csv("csvs/contact_info.csv", skip = 1)
+
+}
+
diff --git a/index.Rmd b/index.Rmd
new file mode 100755
index 0000000..a52f16b
--- /dev/null
+++ b/index.Rmd
@@ -0,0 +1,257 @@
+---
+title: "Curriculum Vitae"
+author: Alison Presmanes Hill
+date: "`r Sys.Date()`"
+output:
+ pagedown::html_resume:
+ css: ['css/styles.css', 'resume']
+ # set it to true for a self-contained HTML page but it'll take longer to render
+ self_contained: true
+---
+
+```{r, include=FALSE}
+knitr::opts_chunk$set(
+ results='asis',
+ echo = FALSE
+)
+
+library(glue)
+library(tidyverse)
+
+# ======================================================================
+# These variables determine how the data is loaded and how the exports are
+# done.
+
+# Is the data stored in Google Sheets? If not, data will be gathered from the
+# csvs/ folder in the project.
+using_googlesheets <- TRUE
+
+# Just the copied URL from the sheet
+positions_sheet_loc <- "https://docs.google.com/spreadsheets/d/11owj8kHEnOpShQZNpNzrZRzQAvdhRv_NkVy46J1u2bQ/edit?usp=sharing"
+
+# Is this sheet available for anyone to read? If you're using a private sheet
+# set this to false and go to gather_data.R and run the data loading manually
+# once to cache authentication
+sheet_is_publicly_readable <- TRUE
+
+# Is the goal of this knit to build a document that is exported to PDF? If so
+# set this to true to have links turned into footnotes at the end of the
+# document
+PDF_EXPORT <- FALSE
+
+
+CV_PDF_LOC <- "github.com/dcossyleon/cv/raw/master/cv.pdf"
+CV_HTML_LOC <- "dcossyleon.github.io/cv/"
+
+
+# A global (gasp) variable that holds all the links that were inserted for
+# placement at the end
+links <- c()
+
+# ======================================================================
+# Now we source two external scripts. One contains functions for building the
+# text output and the other loads up our data from either googlesheets or csvs
+
+# Functions for building sections from CSV data
+source('parsing_functions.R')
+
+# Load data for CV/Resume
+source('gather_data.R')
+```
+
+```{r}
+# When in export mode the little dots are unaligned, so fix that.
+if(PDF_EXPORT){
+ cat("
+ ")
+}
+```
+
+::: {.header-block}
+::: {.header-block-inner}
+::: {.title}
+`r rmarkdown::metadata$author`
+:::
+:::
+:::
+
+::: {#subtitle .subtitle}
+```{r intro}
+ print_text_block(text_blocks, 'intro')
+```
+:::
+
+# Aside
+
+![logo](logo.jpg){width="90%"}
+
+```{r}
+# In PDF export mode point readers to the online version; otherwise offer the PDF download.
+if(PDF_EXPORT){
+ glue("View this CV online with links at _{CV_HTML_LOC}_")
+} else {
+ glue("[ Download CV as a PDF]({CV_PDF_LOC})")
+}
+```
+
+## Contact {#contact}
+
+```{r}
+contact_info %>%
+ glue_data("- {contact}")
+```
+
+## Skills {#skills}
+
+- R Markdown
+- Statistics & Machine Learning
+- HTML/CSS
+- Git
+
+## Disclaimer {#disclaimer}
+
+Source code available: [github.com/apreshill/cv](https://github.com/apreshill/cv).
+
+Updated: `r Sys.Date()`.
+
+# Main
+
+## hello {#title}
+
+## Experience {data-icon="suitcase" data-concise="true"}
+
+```{r}
+print_section(position_data, 'experience')
+```
+
+## Education {data-icon="graduation-cap" data-concise="true"}
+
+```{r}
+print_section(position_data, 'education')
+```
+
+## Honors & Awards {data-icon="trophy"}
+
+```{r}
+print_section(position_data, 'honors_awards')
+```
+
+## Research & Data Science Experience {data-icon="laptop"}
+
+::: {.aside}
+```{r}
+print_text_block(text_blocks, 'data_aside')
+```
+:::
+
+```{r}
+print_section(position_data, 'research_data')
+```
+
+
+## Publications
+
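+```{r}
+# A sketch: the exported CV listed publications (as DOIs) under this heading, so
+# presumably they live in the positions sheet like the other sections. The
+# section id 'academic_articles' is an assumption about how those rows are labeled.
+print_section(position_data, 'academic_articles')
+```
+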
+## Poster Presentations
+
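+```{r}
+# A sketch; as above, the section id 'posters' is an assumption about how the
+# poster entries are labeled in the positions sheet.
+print_section(position_data, 'posters')
+```
+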
diff --git a/logo.jpg b/logo.jpg
new file mode 100644
index 0000000..98d1bcd
Binary files /dev/null and b/logo.jpg differ
diff --git a/parsing_functions.R b/parsing_functions.R
new file mode 100755
index 0000000..b6477f9
--- /dev/null
+++ b/parsing_functions.R
@@ -0,0 +1,116 @@
+# Regex to locate links in text
+find_link <- regex("
+ \\[ # Grab opening square bracket
+  .+? # Link text, as small a match as possible
+ \\] # Closing square bracket
+ \\( # Opening parenthesis
+  .+? # Link URL, again as small as possible
+ \\) # Closing parenthesis
+ ",
+ comments = TRUE)
+
+# Function that removes links from text and replaces them with superscripts that are
+# referenced in an end-of-document list.
+sanitize_links <- function(text){
+ if(PDF_EXPORT){
+ str_extract_all(text, find_link) %>%
+ pluck(1) %>%
+ walk(function(link_from_text){
+ title <- link_from_text %>% str_extract('\\[.+\\]') %>% str_remove_all('\\[|\\]')
+ link <- link_from_text %>% str_extract('\\(.+\\)') %>% str_remove_all('\\(|\\)')
+
+ # add link to links array
+ links <<- c(links, link)
+
+ # Build replacement text
+      new_text <- glue('{title}<sup>{length(links)}</sup>')
+
+ # Replace text
+ text <<- text %>% str_replace(fixed(link_from_text), new_text)
+ })
+ }
+ text
+}
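+
+# Example (a sketch): with PDF_EXPORT <- TRUE and links <- c(),
+#   sanitize_links("See [my site](https://example.com)")
+# returns "See my site<sup>1</sup>" and appends "https://example.com" to `links`.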
+
+# Takes the entire positions dataframe and removes the links
+# row by row so that links for the same position end up
+# numbered right next to each other.
+strip_links_from_cols <- function(data, cols_to_strip){
+ for(i in 1:nrow(data)){
+ for(col in cols_to_strip){
+ data[i, col] <- sanitize_links(data[i, col])
+ }
+ }
+ data
+}
+
+# Take a position dataframe and the section id desired
+# and prints the section to markdown.
+print_section <- function(position_data, section_id){
+ position_data %>%
+ filter(section == section_id) %>%
+ arrange(desc(end)) %>%
+ mutate(id = 1:n()) %>%
+ pivot_longer(
+ starts_with('description'),
+ names_to = 'description_num',
+ values_to = 'description'
+ ) %>%
+ filter(!is.na(description) | description_num == 'description_1') %>%
+ group_by(id) %>%
+ mutate(
+ descriptions = list(description),
+ no_descriptions = is.na(first(description))
+ ) %>%
+ ungroup() %>%
+ filter(description_num == 'description_1') %>%
+ mutate(
+ timeline = ifelse(
+ is.na(start) | start == end,
+ end,
+ glue('{end} - {start}')
+ ),
+ description_bullets = ifelse(
+ no_descriptions,
+ ' ',
+ map_chr(descriptions, ~paste('-', ., collapse = '\n'))
+ )
+ ) %>%
+ strip_links_from_cols(c('title', 'description_bullets')) %>%
+ mutate_all(~ifelse(is.na(.), 'N/A', .)) %>%
+ glue_data(
+ "### {title}",
+ "\n\n",
+ "{loc}",
+ "\n\n",
+ "{institution}",
+ "\n\n",
+ "{timeline}",
+ "\n\n",
+ "{description_bullets}",
+ "\n\n\n",
+ )
+}
+
+# Construct a bar chart of skills
+build_skill_bars <- function(skills, out_of = 5){
+  bar_color <- "#969696"
+  bar_background <- "#d9d9d9"
+  skills %>%
+    mutate(width_percent = round(100*level/out_of)) %>%
+    glue_data(
+      "<div class = 'skill-bar' ",
+      "style = \"background:linear-gradient(to right, ",
+      "{bar_color} {width_percent}%, ",
+      "{bar_background} {width_percent}% 100%)\">",
+      "{skill}",
+      "</div>"
+    )
+}
+
+# Prints out blocks of text from the text_blocks spreadsheet for the intro and asides.
+print_text_block <- function(text_blocks, label){
+ filter(text_blocks, loc == label)$text %>%
+ sanitize_links() %>%
+ cat()
+}
diff --git a/resume.Rmd b/resume.Rmd
new file mode 100755
index 0000000..93b4b2f
--- /dev/null
+++ b/resume.Rmd
@@ -0,0 +1,163 @@
+---
+title: "Nick Strayer's Resume"
+author: Nick Strayer
+date: "`r Sys.Date()`"
+output:
+ pagedown::html_resume:
+ css: ['css/custom_resume.css', 'css/styles.css', 'resume']
+ # set it to true for a self-contained HTML page but it'll take longer to render
+ self_contained: true
+---
+
+
+```{r, include=FALSE}
+knitr::opts_chunk$set(
+ results='asis',
+ echo = FALSE
+)
+library(tidyverse)
+library(glue)
+
+# ======================================================================
+# These variables determine how the data is loaded and how the exports are
+# done.
+
+# Is the data stored in Google Sheets? If not, data will be gathered from the
+# csvs/ folder in the project.
+using_googlesheets <- TRUE
+
+# Just the copied URL from the sheet
+positions_sheet_loc <- "https://docs.google.com/spreadsheets/d/1eTpZBkKZH8gZJXyilYMwVmcna96mBdyhLxBB_DjQwAM/edit?usp=sharing"
+
+# Is this sheet available for anyone to read? If you're using a private sheet
+# set this to false and go to gather_data.R and run the data loading manually
+# once to cache authentication
+sheet_is_publicly_readable <- TRUE
+
+# Is the goal of this knit to build a document that is exported to PDF? If so
+# set this to true to have links turned into footnotes at the end of the
+# document
+PDF_EXPORT <- FALSE
+
+
+# A global (gasp) variable that holds all the links that were inserted for
+# placement at the end
+links <- c()
+
+# ======================================================================
+# Now we source two external scripts. One contains functions for building the
+# text output and the other loads up our data from either googlesheets or csvs
+
+# Functions for building sections from CSV data
+source('parsing_functions.R')
+
+# Load data for CV/Resume
+source('gather_data.R')
+
+# Now we just need to filter down the position data to include less verbose
+# categories and only the entries we have designated for the resume
+position_data <- position_data %>%
+ filter(in_resume) %>%
+ mutate(
+ # Build some custom sections by collapsing others
+ section = case_when(
+ section %in% c('research_positions', 'industry_positions') ~ 'positions',
+ section %in% c('data_science_writings', 'by_me_press') ~ 'writings',
+ TRUE ~ section
+ )
+ )
+```
+
+
+
+Aside
+================================================================================
+
+
+![logo](logo.jpg){width=100%}
+
+Contact {#contact}
+--------------------------------------------------------------------------------
+
+```{r}
+contact_info %>%
+ glue_data("- {contact}")
+```
+
+
+
+Language Skills {#skills}
+--------------------------------------------------------------------------------
+
+```{r}
+build_skill_bars(skills)
+```
+
+
+
+Open Source Contributions {#open-source}
+--------------------------------------------------------------------------------
+
+All projects available at `github.com/nstrayer/`
+
+
+- `shinysense`: R package to use sensor data in Shiny apps
+- `tuftesque`: Hugo theme (behind LiveFreeOrDichotomize.com)
+- `sbmR`: R package for fitting stochastic block models
+
+
+More info {#more-info}
+--------------------------------------------------------------------------------
+
+See my full CV at nickstrayer.me/cv for a more complete list of positions and publications.
+
+
+Disclaimer {#disclaimer}
+--------------------------------------------------------------------------------
+
+Made w/ [**pagedown**](https://github.com/rstudio/pagedown).
+
+Source code: [github.com/nstrayer/cv](https://github.com/nstrayer/cv).
+
+Last updated on `r Sys.Date()`.
+
+
+
+Main
+================================================================================
+
+Desirée De Leon {#title}
+--------------------------------------------------------------------------------
+
+```{r}
+print_text_block(text_blocks, 'intro')
+```
+
+
+
+Education {data-icon=graduation-cap data-concise=true}
+--------------------------------------------------------------------------------
+
+```{r}
+position_data %>% print_section('education')
+```
+
+
+
+Selected Positions {data-icon=suitcase}
+--------------------------------------------------------------------------------
+
+```{r}
+position_data %>% print_section('positions')
+```
+
+
+
+Selected Writing {data-icon=newspaper}
+--------------------------------------------------------------------------------
+
+```{r}
+position_data %>% print_section('writings')
+```
+
+