stanfordbeads/New_trap_code

The repository for work done with and on the new trap

Here we store all the code stolen from the old trap and also add newly developed code.

If you want to get started with GitHub, there are a few introductory videos I randomly found on YouTube: https://www.youtube.com/watch?v=O72FWNeO-xY&list=PL5-da3qGB5IBLMp7LtN8Nc3Efd4hJq0kD&index=1

First of all, I would recommend creating and using your own GitHub account rather than the general account (stanfordbeads). The general procedure can be summarized as follows:

1a. Clone the repository to your account on the beads-dataserver2

1b. Clone the repository to your own computer [you will see that this is not strictly necessary, since you should be able to access the JupyterHub at any time from anywhere]

  1. Writing code? Good! But first check whether it already exists (Chas and AlexR have written a lot of stuff). If you are not sure, better ask than do a job twice.

  2. You wrote new code? Add it to the git repository by pushing it.

  3. You improved existing code? Consider opening a pull request so that the changes get cross-checked.

  4. In all cases, comment(!) your code as well as your changes. Write more descriptive text rather than less. Always a good idea. Always.

Accessing the JupyterHub

Using the JupyterHub Terminal

Data saving

In our meeting on 05/08/2019 we decided on the following steps:

Save data in the same way as in the old trap. This includes two files: one for the data stream from the FPGA and one from the DAQ (mainly the electrode data). The data should be saved to a folder according to its date. Within the date folder a substructure will be created automatically, depending on whatever measurement was taken (a minimal sketch of this folder logic follows the list below). The list includes:

  • without radial feedback at different heights
  • with radial feedback
  • after pump down
  • coarse charging data
  • diagonalization data
  • mass measurement
  • fine discharging data
  • height finding data
  • measurement of interest (further categories, such as spinning, can of course be added as required)
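
As a sketch of the date/measurement-type folder logic described above, the automatic substructure could be created roughly as follows in Python; the base path, folder names, and file names here are placeholders, not the actual DAQ configuration.

```python
import os
from datetime import date

# Hypothetical base path on the data server; adjust to the real mount point.
BASE_DIR = "/data/new_trap"

def make_measurement_dir(measurement_type, day=None):
    """Create (if needed) and return <BASE_DIR>/<YYYYMMDD>/<measurement_type>."""
    day = day or date.today()
    folder = os.path.join(BASE_DIR, day.strftime("%Y%m%d"), measurement_type)
    os.makedirs(folder, exist_ok=True)  # creates the date folder and subfolder in one go
    return folder

# Example: both the FPGA stream file and the DAQ (electrode) file for one
# coarse-charging run end up in the same automatically created folder.
out_dir = make_measurement_dir("coarse_charging")
fpga_file = os.path.join(out_dir, "fpga_stream_0001.h5")
daq_file = os.path.join(out_dir, "daq_electrodes_0001.h5")
```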

Database

Given the .XML output we will implement a searchable and selectable database in order to allow systematic searches through the data.
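
The exact content of the .XML files is not fixed yet, so the following is only a rough sketch of how the metadata could be indexed and queried; the tag names, the glob path, and the use of pandas are assumptions.

```python
import glob
import xml.etree.ElementTree as ET
import pandas as pd

def index_metadata(xml_pattern="/data/new_trap/**/*.xml"):
    """Collect selected fields from every .XML metadata file into one searchable table."""
    rows = []
    for path in glob.glob(xml_pattern, recursive=True):
        root = ET.parse(path).getroot()
        rows.append({
            "file": path,
            # Tag names are placeholders; use whatever the DAQ actually writes.
            "date": root.findtext("date"),
            "measurement": root.findtext("measurement_type"),
        })
    return pd.DataFrame(rows)

# Example query: select all diagonalization data taken on a given day.
df = index_metadata()
selection = df[(df["measurement"] == "diagonalization") & (df["date"] == "2019-08-05")]
```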

Data processor

We will investigate the old data processor and use it. However, parallelization should be added.
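
Since the old processor's interface is not described here, the snippet below is only a sketch of what file-level parallelization could look like, with process_file standing in for whatever the old data processor does per file.

```python
from multiprocessing import Pool

def process_file(path):
    """Stand-in for the per-file work done by the old data processor."""
    # e.g. load the FPGA and DAQ files, compute spectra, return summary quantities
    return {"file": path, "status": "ok"}

def process_in_parallel(paths, n_workers=8):
    """Run the per-file processing on n_workers worker processes instead of serially."""
    with Pool(processes=n_workers) as pool:
        return pool.map(process_file, paths)

if __name__ == "__main__":
    results = process_in_parallel(["file_0001.h5", "file_0002.h5"])
```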

High-level analysis

Storing LabView code

Use the wiki pages
