TigerData is a comprehensive set of data storage and management tools and services that provides storage capacity, reliability, functionality, and performance to meet the needs of a rapidly changing research landscape and to enable new opportunities for leveraging the power of institutional data.
This application provides a front end for users to create and manage projects that live in the TigerData infrastructure.
- Auto-built code documentation is available at https://pulibrary.github.io/tigerdata-app/
- Design documents and meeting notes are in Google Drive
- RDSS internal notes are in a separate directory
- A set of requirements derived from early sketches is here.
- We're writing a "Missing Manual" for the subset of Mediaflux that is used by TigerData.
The conceptual diagrams showcase the user (i.e., a researcher or SysAdmin) and their typical interactions with the TigerData-rails application. The conceptual designs were created based on the TigerData design framework and may change as the framework is updated.
The system will eventually have many roles. Please refer to the docs for a description of the system roles.
- Check out the code and cd into the project directory
- Install tool dependencies; if you've worked on other PUL projects, they will already be installed.
- Install asdf dependencies with asdf
asdf plugin add ruby
asdf plugin add node
asdf plugin add yarn
asdf plugin add java
asdf install
- ...but because asdf is not a dependency manager, you may need to install other dependencies if you hit errors. For example:
brew install gpg
- OR - Install dependencies with brew and chruby
ruby-install 3.2.3 -- --with-openssl-dir=$(brew --prefix openssl@1.1)
- If you get "error: use of undeclared identifier 'RUBY_FUNCTION_NAME_STRING'" while updating, make sure your Xcode tools are up to date.
- Close the terminal window and open a new terminal
chruby 3.2.3
ruby --version
- Install language-specific dependencies
bundle install
yarn install
On a Mac with an M1 chip, bundle install may fail. This suggestion helped:
gem install eventmachine -v '1.2.7' -- --with-openssl-dir=$(brew --prefix libressl)
brew install pkg-config
bundle install
We use Lando to run services required for both the test and development environments.
Start and initialize database services with:
bundle exec rake servers:start
To stop database services:
bundle exec rake servers:stop
or lando stop
You will also want to run the vite development server:
bin/vite dev
Authentication and authorization are restricted to a few selected users. Make sure to run the rake task to pre-populate your local database:
bundle exec rake load_users:from_registration_list
If your name is not on the registration list, see the steps below under "User Registration List" for instructions on how to add yourself.
Documentation for starting the Mediaflux server can be found at doc/local_development
- Once Mediaflux is running locally, create the schema:
bundle exec rake schema:create
By default, the Mediaflux deployment includes a user account with the following credentials:
- domain: system
- user: manager
- password: change_me
Alternatively, you may use docker/bin/shell to create a terminal session within the container and find individual accounts in the file /setup/config/users.json.
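For example, to view the account list without an interactive session (this assumes the container is named mediaflux, as in the docker cp example below):

```
# Open an interactive shell inside the running container:
docker/bin/shell

# Or print the account list directly from the running container:
docker exec mediaflux cat /setup/config/users.json
```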
The Mediaflux aterm may be accessed at http://0.0.0.0:8888/aterm/
The Mediaflux desktop client may be accessed at http://0.0.0.0:8888/desktop/
The Mediaflux thick client (aterm) may be started using the Java Virtual Machine with the following steps:
$ docker/bin/start
# Within another terminal session, invoke:
$ docker cp mediaflux:/usr/local/mediaflux/bin/aterm.jar ~/aterm.jar
$ java -Xmx4g -Djava.net.preferIPv4Stack=true -jar ~/aterm.jar
> server.identity.set :name carolyn
> display font-size 18
> display prompt "carolyn > "
> display save
The Mediaflux service documentation may be accessed at http://0.0.0.0:8888/mflux/service-docs/
asdf install
bundle install
yarn install
bundle exec rake servers:start
- Fast:
bundle exec rspec spec
- Run in browser:
RUN_IN_BROWSER=true bundle exec rspec spec
- Run connected to CI mediaflux instance:
MFLUX_CI=true MFLUX_CI_PASSWORD="[MFLUX_CI_PASSWORD]" bundle exec rspec spec
The MFLUX_CI_PASSWORD value can be found in the tigerdata-config vault.
- To run only the integration tests, pass the flag that limits the run to tests tagged as integration:
bundle exec rspec --tag integration
bundle exec rails s -p 3000
- Access application at http://localhost:3000/
Deploy with Capistrano (we intend to move to a deployment mechanism based on Ansible Tower, but that is not yet implemented)
bundle exec cap production deploy
or
bundle exec cap staging deploy
To remove a machine from the load balancer you can use the following command:
bundle exec cap --hosts=tigerdata-prod1 production application:remove_from_nginx
Note that the name of the machine (tigerdata-prod1 in the example above) must match the name of the machine indicated in config/deploy for the environment that you are working in. When this command executes successfully you should see a message describing the changes made on the server; if you see nothing, you are probably not passing the right hosts.
You can use application:serve_from_nginx to re-add the machine to the load balancer.
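For example, mirroring the remove_from_nginx invocation above (the host name is illustrative):

```
bundle exec cap --hosts=tigerdata-prod1 production application:serve_from_nginx
```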
Mailcatcher is a gem that can also be installed locally. See the mailcatcher documentation for how to run it on your machine.
To see mail that has been sent on the staging and QA servers, you can use Capistrano to open both Mailcatcher consoles (see below); look in your default browser for the consoles.
cap staging mailcatcher:console
cap qa mailcatcher:console
Emails on production are sent via Pony Express.
For local development, add yourself as a SuperUser to the TigerData preliminary registration list and follow these instructions:
To save updates and make changes to appointed users for early testing of the TigerData site:
- Make the requested changes to the Google spreadsheet
- Save those updated changes
- Download the file as a .CSV file
- Copy the downloaded .CSV file to data/user_registration_list.csv
- Run sed to remove the ^M carriage returns from the file:
sed -e "s/\r//g" user_registration_list.csv > user_registration_list_production.csv
- Open a PR to check the updated file into version control
- Once that PR is merged, release and deploy the code. This will automatically run the load_users.rake rake task.
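If you need to refresh users without waiting for a deploy, the same task can be run directly (this is the task referenced earlier in this README):

```
bundle exec rake load_users:from_registration_list
```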
Sidekiq is used to run background jobs on the server. The jobs are created by ActiveJob and ActionMailer.
You can go to the following URLs to see the Sidekiq dashboard, but because these environments are load balanced, the view will switch back and forth between hosts.
- https://tigerdata-staging.lib.princeton.edu/sidekiq
- https://tigerdata-qa.princeton.edu/sidekiq
- https://tigerdata-app.princeton.edu/sidekiq
Instead, use the Capistrano task, which will open an SSH tunnel to all nodes in a TigerData environment (staging, QA, or production), with a tab in your browser for each one.
cap staging sidekiq:console
cap qa sidekiq:console
cap production sidekiq:console
Workers must be running on each server in order for mail to be sent and background jobs to run. The Sidekiq workers run on each server via a service, tiger-data-workers; the commands for checking and restarting them are shown below.
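To check on or restart the workers, run the following on the server:

```
# Show the status of the Sidekiq workers
sudo service tiger-data-workers status

# Restart the workers
sudo service tiger-data-workers restart
```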
To attach the output of an existing File Inventory Job to a user, we can run the rake task file_inventory:attach_file.
- Log into one of the production machines.
- Find the job_id of the job that you want to attach the file to. You can do this via the Rails console (for example, finding the last job for the user that is having problems); see the sketch after this list.
- Find the file that you want to attach to the job. Files are under /mnt/nfs/tigerdata and each file is named after its job_id. Copy this file to a file named after the job_id that you will attach it to.
- Run the rake task, giving it the job_id and the name of the file that you want to attach to it.
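As a rough sketch of the lookup and copy steps, assuming the jobs are reachable from the Rails console through a model named FileInventoryRequest with a job_id column (hypothetical names; substitute the app's actual model):

```
# Hypothetical lookup: print the job_id of the most recent job for a user.
# FileInventoryRequest and the uid attribute are illustrative, not confirmed API.
bundle exec rails runner 'u = User.find_by(uid: "some_netid"); puts FileInventoryRequest.where(user_id: u.id).last&.job_id'

# Copy the source inventory file to one named after the target job_id
# (placeholder ids; files live under /mnt/nfs/tigerdata as described above).
cp /mnt/nfs/tigerdata/aaaa-bbbb-cccc.csv /mnt/nfs/tigerdata/xxxx-yyyy-zzzz.csv
```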
For example, if the job_id is "xxxx-yyyy-zzzz" you'll run the Rake task as follows:
bundle exec rake file_inventory:attach_file[xxxx-yyyy-zzzz,/mnt/nfs/tigerdata/xxxx-yyyy-zzzz.csv]
Technically you don't need to copy the source file to a new file named after the job_id you are interested in, but keeping each file named after the job it belongs to keeps things tidy. Also, since each file is cleaned up on its own schedule, keeping them separate prevents a file from disappearing for one user when it is cleaned up for another.