Commit e9f3405

docs: update to render with mkdocs

ejseqera committed Jan 29, 2024
1 parent 910cc66 commit e9f3405

Showing 35 changed files with 184 additions and 85 deletions.
45 changes: 0 additions & 45 deletions demo/demo_overview.md

This file was deleted.

7 changes: 5 additions & 2 deletions demo/add_a_dataset.md → demo/docs/add_a_dataset.md
Original file line number Diff line number Diff line change
@@ -11,6 +11,7 @@ When running pipelines on the Cloud, this samplesheet has to be made available i
The [nf-core/rnaseq](https://github.com/nf-core/rnaseq) pipeline works with input datasets (samplesheets) containing sample names, fastq file locations, and indications of strandedness. The Seqera Community Showcase sample dataset for _nf-core/rnaseq_ looks like this:

**Example rnaseq dataset**
<center>

| sample | fastq_1 | fastq_2 | strandedness |
| ------------------- | ------------------------------------ | ------------------------------------ | ------------ |
@@ -22,12 +23,14 @@ The [nf-core/rnaseq](https://github.com/nf-core/rnaseq) pipeline works with inpu
| RAP1_UNINDUCED_REP2 | s3://nf-core-awsmegatests/rnaseq/... | | reverse |
| RAP1_IAA_30M_REP1 | s3://nf-core-awsmegatests/rnaseq/... | s3://nf-core-awsmegatests/rnaseq/... | reverse |

</center>

Download the nf-core/rnaseq [samplesheet_test.csv](samplesheet_test.csv) provided in this repository onto your computer.

## 2. Add a Dataset
## 2. Add the Dataset

Go to the 'Datasets' tab and click 'Add Dataset'.

![Adding a Dataset](docs/images/sp-cloud-add-a-dataset.gif)
![Adding a Dataset](assets/sp-cloud-add-a-dataset.gif)

Specify a name for the dataset, such as 'nf-core-rnaseq-test-dataset', add a description, select the option to include the first row as a header, and upload the CSV file provided in this repository. This CSV file specifies the paths to 7 small FASTQ files for a sub-sampled yeast RNAseq dataset.
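For reference, a samplesheet in this format can be drafted and sanity-checked locally before uploading. The sketch below is illustrative only: the sample names and S3 paths are made up, not the actual test data from this repository.

```shell
# Draft a minimal rnaseq-style samplesheet (illustrative names and paths only)
cat > my_samplesheet.csv <<'EOF'
sample,fastq_1,fastq_2,strandedness
WT_REP1,s3://my-bucket/rnaseq/WT_REP1_R1.fastq.gz,s3://my-bucket/rnaseq/WT_REP1_R2.fastq.gz,reverse
RAP1_UNINDUCED_REP1,s3://my-bucket/rnaseq/RAP1_UNINDUCED_REP1_R1.fastq.gz,,reverse
EOF

# Sanity-check: every row must have exactly 4 comma-separated fields
# (single-end samples simply leave fastq_2 empty, as in the second sample)
awk -F',' 'NF != 4 { printf "Bad row %d: %s\n", NR, $0; exit 1 }' my_samplesheet.csv \
  && echo "samplesheet OK"
```

The empty `fastq_2` field still counts as a column, which is why the field-count check passes for single-end samples.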
6 changes: 3 additions & 3 deletions demo/add_a_pipeline.md → demo/docs/add_a_pipeline.md
@@ -6,7 +6,7 @@ The Launchpad allows you to launch and manage Nextflow pipelines and associated

To add a pipeline, click on the **'Add Pipeline'** button. As an example, we will add the [nf-core/rnaseq](https://github.com/nf-core/rnaseq) pipeline to the Launchpad.

![Adding nf-core/rnaseq pipeline](docs/images/sp-cloud-add-rnaseq.gif)
![Adding nf-core/rnaseq pipeline](assets/sp-cloud-add-rnaseq.gif)

Specify a name and description, and select a pre-existing AWS compute environment to execute the pipeline on.

@@ -25,11 +25,11 @@ Additionally, specify a version of the pipeline as the 'Revision number'. You ca
Pipeline parameters and Nextflow configuration settings can also be specified as you add the pipeline to the Launchpad.

For example, a pipeline can be pre-populated to run with specific parameters on the Launchpad.
![Adding pipeline parameters](docs/images/sp-cloud-pipeline-params.gif)
![Adding pipeline parameters](assets/sp-cloud-pipeline-params.gif)

## 4. Pre-run script and additional options

You can run custom code either before or after the execution of the Nextflow script. These text fields allow you to enter shell commands.

Pre-run scripts are executed in the nf-launch script prior to invoking Nextflow processes. They are useful for executor setup (e.g., to use a specific version of Nextflow) and for troubleshooting.
![Specify NF version in pre-run script](docs/images/sp-cloud-pre-run-options.gif)
![Specify NF version in pre-run script](assets/sp-cloud-pre-run-options.gif)
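As a sketch, a pre-run script might pin the Nextflow version through the `NXF_VER` environment variable (a standard Nextflow setting; the version number here is only an example):

```shell
# Example pre-run script: pin the Nextflow version for this run.
# NXF_VER is read by the Nextflow launcher; 23.10.1 is an arbitrary example.
export NXF_VER=23.10.1

# Optional troubleshooting output, which ends up in the run's logs
echo "Requested Nextflow version: ${NXF_VER}"
```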
File renamed without changes
File renamed without changes
Binary file added demo/docs/assets/landing_page.png
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
4 changes: 2 additions & 2 deletions demo/data_explorer.md → demo/docs/data_explorer.md
@@ -8,7 +8,7 @@ With Data Explorer, you can browse and interact with remote data repositories fr
To view bucket details such as the cloud provider, bucket address, and credentials, select the information icon next to a bucket in the Data Explorer list.

- Search and filter buckets
Search for buckets by name and region (e.g., region:eu-west-2) in the search field, and filter by provider.
Search for buckets by name and region (e.g., `region:eu-west-2`) in the search field, and filter by provider.

- Hide buckets from list view
Workspace maintainers can hide buckets from the Data Explorer list view. Select multiple buckets, then select Hide in the Data Explorer toolbar. To hide buckets individually, select Hide from the options menu of a bucket in the list.
@@ -30,4 +30,4 @@ From the View cloud bucket page, you can:
1. Preview and download files: Select the download icon in the 'Actions' column to download a file directly from the list view. Select a file to open a preview window that includes a Download button.
2. Copy bucket/object paths: Select the Path of an object on the cloud bucket page to copy its absolute path to the clipboard. Use these object paths to specify input data locations during pipeline launch, or add them to a dataset for pipeline input.

![Data Explorer bucket](docs/images/sp-cloud-data-explorer.gif)
![Data Explorer bucket](assets/sp-cloud-data-explorer.gif)
18 changes: 18 additions & 0 deletions demo/docs/demo_overview.md
@@ -0,0 +1,18 @@
### 1. Log in to seqera.io

Log in to Seqera Platform with a GitHub account, a Google account, or an email address.

If an email address is provided, Seqera Cloud will send an authentication link to that address to log in with.

![Seqera Platform Cloud login](assets/sp-cloud-signin.gif)

### 2. Navigate into the seqeralabs/showcase Workspace

All resources in Seqera Platform live inside a Workspace, which in turn belongs to an Organisation. Typically, teams of colleagues or collaborators share one or more workspaces. All resources in a Workspace (e.g., pipelines, compute environments, and datasets) are shared by members of that workspace.

Navigate into the `seqeralabs/showcase` Workspace.

![Seqera Labs Showcase Workspace](assets/go-to-workspace.gif)

### 3. TODO User settings

38 changes: 38 additions & 0 deletions demo/docs/index.md
@@ -0,0 +1,38 @@
# Seqera Platform: Demonstration Walkthrough

Walkthrough documentation of [Seqera Platform](https://seqera.io/)
---
![](assets/landing_page.png){ .right .image}

[:fontawesome-solid-user: Login to Seqera Platform](https://tower.nf/login){ .md-button }
---

---

## Overview
This guide provides a walkthrough of a standard Seqera Platform demonstration. It describes how to add a pipeline to the Launchpad, launch a workflow with pipeline parameters, monitor a run, and examine the run details. The demonstration also highlights key features such as Pipeline Optimization, Data Explorer, and Compute Environment creation.

More specifically, this demonstration will focus on using the [nf-core/rnaseq](https://github.com/nf-core/rnaseq) pipeline as an example and executing the workflow on AWS Batch.

---

## Requirements

- A [Seqera Platform Cloud](https://seqera.io/login) account
- Access to a Workspace in Seqera Platform
- :fontawesome-brands-aws: An [AWS Batch Compute Environment created in that Workspace](https://docs.seqera.io/platform/23.3.0/compute-envs/aws-batch)
- The [nf-core/rnaseq](https://github.com/nf-core/rnaseq) pipeline repository
- A samplesheet to create a Dataset on the Platform for running minimal RNAseq test data (see the [samplesheet_test.csv](./samplesheet_test.csv) file in this repository)

---

## Sections
[:material-check-circle:]() [Overview of the Platform](./demo_overview.md) <br/>
[:material-check-circle:]() [Add a Pipeline to the Launchpad](./add_a_pipeline.md) <br/>
[:material-check-circle:]() [Add a Dataset to Seqera Platform](./add_a_dataset.md) <br/>
[:material-check-circle:]() [Launch a Pipeline](./launch_pipeline.md) <br/>
[:material-check-circle:]() [Runs and Monitoring your workflow](./monitor_run.md) <br/>
[:material-check-circle:]() [Examine the run and task details](./run_details.md) <br/>
[:material-check-circle:]() [Resume a Pipeline](./resume_pipeline.md) <br/>
[:material-check-circle:]() [Data Explorer](./data_explorer.md) <br/>
[:material-check-circle:]() [Optimize your Pipeline](./pipeline_optimization.md) <br/>
6 changes: 3 additions & 3 deletions demo/launch_pipeline.md → demo/docs/launch_pipeline.md
@@ -6,7 +6,7 @@ Navigate back to the Launchpad to begin executing the newly added nf-core/rnaseq

Select 'Launch' next to the pipeline of your choice to open the pipeline launch form.

![Launching a Pipeline](docs/images/sp-cloud-launch-form.gif)
![Launching a Pipeline](assets/sp-cloud-launch-form.gif)

Seqera uses a [nextflow_schema.json](https://github.com/nf-core/rnaseq/blob/master/nextflow_schema.json) file in the root of the pipeline repository to dynamically create a form with the necessary pipeline parameters.
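As an illustration of the mechanism, a heavily trimmed-down schema might look like the following; this is an assumed subset for demonstration, not the real nf-core/rnaseq schema:

```shell
# Write a minimal sketch of a nextflow_schema.json (illustrative subset only)
cat > nextflow_schema.json <<'EOF'
{
  "$schema": "http://json-schema.org/draft-07/schema",
  "title": "example pipeline parameters",
  "type": "object",
  "properties": {
    "input": { "type": "string", "description": "Path to the samplesheet CSV" },
    "outdir": { "type": "string", "description": "Output directory for results" }
  },
  "required": ["input", "outdir"]
}
EOF

# Quick well-formedness check before committing the schema to a repository
python3 -m json.tool nextflow_schema.json > /dev/null && echo "schema is valid JSON"
```

Each entry under `properties` becomes a field in the generated launch form, with `required` parameters marked as mandatory.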

@@ -22,10 +22,10 @@ All pipelines contain at least these parameters:

For the 'input' parameter, click on the text box and click on the name of the dataset added in the previous step.

![Input parameters](docs/images/sp-cloud-launch-parameters-input.gif)
![Input parameters](assets/sp-cloud-launch-parameters-input.gif)

For the 'outdir' parameter, specify an S3 directory path manually, or select Browse to specify a cloud storage directory using Data Explorer.

![Output parameters](docs/images/sp-cloud-launch-parameters-outdir.gif)
![Output parameters](assets/sp-cloud-launch-parameters-outdir.gif)

The remaining fields of the pipeline parameters form will vary for each pipeline, dependent on the parameters specified in the pipeline schema. When you have filled the necessary launch form details, select 'Launch'.
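The same parameters the launch form collects can also be kept in a params file; a sketch is shown below. The bucket paths are placeholders, and the commented-out `tw launch` line assumes the optional Seqera Platform CLI is installed and authenticated.

```shell
# Record the launch parameters in a params file (placeholder paths)
cat > params.yaml <<'EOF'
input: "s3://my-bucket/datasets/samplesheet_test.csv"
outdir: "s3://my-bucket/results"
EOF

# The same launch can then be scripted, e.g. with the Seqera Platform CLI:
# tw launch nf-core/rnaseq --params-file=params.yaml
echo "params.yaml written"
```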
20 changes: 10 additions & 10 deletions demo/monitor_run.md → demo/docs/monitor_run.md
@@ -4,14 +4,14 @@ Upon launching, you'll be navigated to the 'Runs' tab which contains all execute

The Runs tab contains all previous job executions. Each new or resumed job is given a random name, e.g., "grave_williams". Each row corresponds to a specific job. As a job executes, it can transition through the following states:

submitted: Pending execution
running: Running
succeeded: Completed successfully
failed: Successfully executed, where at least one task failed with a terminate error strategy
cancelled: Stopped forceably during execution
- **submitted**: Pending execution
- **running**: Running
- **succeeded**: Completed successfully
- **failed**: Execution completed, but at least one task failed with a `terminate` error strategy
- **cancelled**: Stopped forcibly during execution
- **unknown**: Indeterminate status

![Viewing Runs](docs/images/sp-cloud-view-all-runs.gif)
![Viewing Runs](assets/sp-cloud-view-all-runs.gif)

As the pipeline begins to run, you will see the Runs page become populated with the following details:

@@ -21,13 +21,13 @@ As the pipeline begins to run, you will see the Runs page become populated with
- Execution Log
- Datasets used, and Reports generated

![View the rnaseq run](docs/images/sp-cloud-run-info.gif)
![View the rnaseq run](assets/sp-cloud-run-info.gif)

## 1. View Run info

The Runs page shows general information about who executed the run and when, the Git commit hash and tag used, as well as additional details about the compute environment and the version of Nextflow used.

![General run information](docs/images/general-run-details.gif)
![General run information](assets/general-run-details.gif)

## 2. View Reports

@@ -37,6 +37,6 @@ Reports allow you to directly visualise supported file types or to download them

For example, for the nf-core/rnaseq pipeline, you can view the MultiQC report generated.

![Reports tab](docs/images/reports-tab.png)
![Reports tab](assets/reports-tab.png)

![Reports MultiQC preview](docs/images/reports-preview.png)
![Reports MultiQC preview](assets/reports-preview.png)
@@ -6,10 +6,12 @@ When a run completes successfully, an optimized profile is created. This profile

Navigate back to the Launchpad, click on the nf-core/rnaseq Pipeline added, and click on the 'Lightbulb' icon to view the optimized profile.

![Optimized configuration](docs/images/optimize-configuration.gif)
![Optimized configuration](assets/optimize-configuration.gif)

You can verify the optimized configuration of a given run by inspecting the resource usage plots for that run and these fields in the run's task table:

CPU usage: pcpu
Memory usage: peakRss
Runtime: start and complete
| Description | Key |
|---------------|---------------------------|
| CPU usage | `pcpu` |
| Memory usage | `peakRss` |
| Runtime | `start` and `complete` |
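As a toy illustration of how these fields can be read, the following compares `peakRss` to the requested memory to show the headroom an optimized profile could reclaim. All values in the extract are made up, not real nf-core/rnaseq metrics.

```shell
# Toy task-table extract: peak memory used vs. memory requested per task
cat > tasks.csv <<'EOF'
task,pcpu,peakRss_MB,memory_requested_MB
FASTQC,98.5,512,4096
STAR_ALIGN,310.2,14200,16384
EOF

# Report what fraction of the requested memory each task actually used
awk -F',' 'NR > 1 { printf "%s: %.0f%% of requested memory used\n", $1, 100 * $3 / $4 }' tasks.csv
```

A task using only a small fraction of its requested memory is exactly the kind of over-allocation an optimized profile trims.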
6 changes: 3 additions & 3 deletions demo/resume_pipeline.md → demo/docs/resume_pipeline.md
@@ -6,14 +6,14 @@ Seqera Platform enables you to use Nextflow's resume functionality to resume a w

Users with appropriate permissions can change the compute environment when resuming a run. The new compute environment must have access to the original run work directory.

This means that the new compute environment must have a work directory that matches the root path of the original pipeline work directory. For example, if the original pipeline work directory is s3://foo/work/12345, the new compute environment must have access to s3://foo/work.
This means that the new compute environment must have a work directory that matches the root path of the original pipeline work directory. For example, if the original pipeline work directory is `s3://foo/work/12345`, the new compute environment must have access to `s3://foo/work`.

![Resuming a run](docs/images/sp-cloud-resume-a-run.gif)
![Resuming a run](assets/sp-cloud-resume-a-run.gif)
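That root-path requirement amounts to a simple prefix check, sketched below with illustrative paths:

```shell
# The new compute environment's work directory must be a root (prefix)
# of the original run's work directory for resume to find cached tasks.
original_workdir="s3://foo/work/12345"   # illustrative
new_ce_workdir="s3://foo/work"           # illustrative

case "$original_workdir" in
  "$new_ce_workdir"/*) echo "OK: the new compute environment can reach the original work directory" ;;
  *)                   echo "Mismatch: resume will not find the cached task directories" ;;
esac
```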

## 2. Task Status and Cached Processes

The Runs page for a workflow will display the status of tasks in real time as they progress from 'Submitted' to 'Running' to 'Succeeded' or 'Failed'.

If you are resuming a run that had tasks that were completed successfully, you will see a number of tasks shown as 'Cached'.

![Cached processes](docs/images/sp-cloud-cached-processes.gif)
![Cached processes](assets/sp-cloud-cached-processes.gif)
28 changes: 15 additions & 13 deletions demo/run_details.md → demo/docs/run_details.md
@@ -21,40 +21,42 @@ Scroll down the Runs page and you will see:
- Workflow metrics (e.g., CPU efficiency, memory efficiency)
- Task details table for every task in the workflow

## 3. Task details table
## 3. Task details window

Select a task in the task table to open the Task details dialog. The dialog has three tabs: About, Execution log, and Data Explorer.

**About**

The About tab provides the following information:

1. Name: Process name and tag
1. **Name**: Process name and tag

2. Command: Task script, defined in the pipeline process
2. **Command**: Task script, defined in the pipeline process

3. Status: Exit code, task status, attempts
3. **Status**: Exit code, task status, attempts

4. Work directory: Directory where the task was executed
4. **Work directory**: Directory where the task was executed

5. Environment: Environment variables that were supplied to the task
5. **Environment**: Environment variables that were supplied to the task

6. Execution time: Metrics for task submission, start, and completion time
6. **Execution time**: Metrics for task submission, start, and completion time

7. Resources requested: Metrics for the resources requested by the task
7. **Resources requested**: Metrics for the resources requested by the task

8. Resources used: Metrics for the resources used by the task
8. **Resources used**: Metrics for the resources used by the task


![Task details window](assets/task-details.gif)

**Execution log**

The Execution log tab provides a real-time log of the selected task's execution. Task execution and other logs (such as stdout and stderr) are available for download from here, if still available in your compute environment.

![Task details window](docs/images/task-details.gif)

## 4. Task details in Data Explorer

The Data Explorer tab allows you to view the log files and output files generated for each task in its' working directory within the Platform.
The Data Explorer tab allows you to view the log files and output files generated for each task in its working directory, within the Platform.

You can view, download, and retrieve the link for these intermediate files from the Explorer tab.
From the Data Explorer tab, you can view, download, and retrieve links to these intermediate files stored in the cloud.

![Task data explorer](docs/images/sp-cloud-task-data-explorer.gif)
![Task data explorer](assets/sp-cloud-task-data-explorer.gif)
File renamed without changes.
29 changes: 29 additions & 0 deletions demo/docs/style.css
@@ -0,0 +1,29 @@
.md-typeset h1, .md-typeset h2, .md-typeset h3 {
color: #000;
font-weight: 600;
}

.left{
float: left;
}

.right{
float: right;
width: 40%;
}

.md-footer {
margin-top: 40px;
background-color: #009485;
}

.md-footer__inner {
display: none !important;
}


@media (max-width: 600px) {
.image {
display:none;
}
}
