Commit ca4326e

Beta-3. Added a new method of setup and grabbing files.
Added a new way to set up and run the whole system. To better protect my sanity, Dr. Hampton suggested grabbing files with wget from my own www directory, so I don't need to hardcode my password into anything. Instead of wget, though, I went with curl, as it easily overwrites files in the process of downloading them, which is what we want after all. Also changed the way the system dies when it comes across a bad pending job status (learn something new about UGE every day): it now writes the status that caused the death to a log, instead of mysteriously dying with exit code 20. Updated the README to reflect the new additions. For cosmetic appeal, I added a favicon.ico to be displayed on all of the pages. It is the same one found on crc.nd.edu, so if you're an outsider using this repo, I'd suggest changing it to your organization's.
1 parent 1fb7b2b commit ca4326e
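The overwrite behavior that motivated the switch from wget to curl can be sketched like this; a local file:// URL stands in for the real web-hosted path, and all file names here are illustrative:

```shell
# `curl -o` writes to the named file on every run, replacing the old copy,
# whereas repeated plain `wget` downloads pile up as snapshot.html.1, .2, ...
src=$(mktemp)
echo "version 1" > "$src"
curl -s -o snapshot.html "file://$src"
echo "version 2" > "$src"
curl -s -o snapshot.html "file://$src"   # same name: old copy overwritten
cat snapshot.html
```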

File tree

10 files changed: 188 additions, 19 deletions


README

Lines changed: 36 additions & 14 deletions

@@ -32,33 +32,55 @@
 you.
 First you should place queue_mapd.py on the front end you plan on using.
 Once it's there, you should run it with ./queue_mapd.py --setup.
-This will create setup files the setup script is looking for.
+This will create the setup files the setup script is looking for, which
+contain the nodes each queue currently has, in plain text. The setup
+script uses those files to create the directories for each of the nodes.
 
+As of beta-3, there are two ways you can run this. Either:
+(1) Hard-code your password into a script which gathers the necessary
+files from the front end using sshpass and scp. The setup script for
+this method also uses sshpass and scp. Or (2) have the daemon write the
+necessary files to a location accessible from the web by way of curl,
+e.g. /foo/bar/www/index-long.html. The setup script for this method also
+relies on curl and some form of web access.
+
+Method (1):
+------------------------------------------------------------------------
 Next you should view the setup.sh script in your favorite editor (vim)
 and enter your information where specified. If you can think of a better
 way to do it and you understand how it works, go ahead and go your own
 way. Enter both your CRC and local info. To be safe, make sure this
 script is readable only by yourself.
+
+Once setup.sh is configured, configure the grab_queue_files.sh script
+the same way you configured the setup script.
 
 Once it's configured, run the setup script. This will create all the
 directories to be used later on.
 
-If the script completed nicely, then configure the grab_queue_files.sh
-script. Do the same as before, entering info where specified into the
-script itself.
-
-Once that is configured, you can manually run the script to see if
-everything is working. Just go to localhost in a browser to test it.
+Method (2):
+------------------------------------------------------------------------
+View the curl_setup.sh script in your favorite text editor and enter any
+information that is required, such as file paths. If you do not want to
+code in your server password, you can change the sudoers file to allow
+whichever user you cron this job as to run all of the needed commands
+without a sudo password.
+
+Now that curl_setup.sh is configured, do the same with the
+curl_queue_files.sh file. This is the script which will indefinitely
+grab the html files for use in your webpages.
 
-If everything appears to be working, you should next make
-grab_queue_files.sh a cron job. Do this by typing 'crontab -e', going to
-the last line of the file under all of the comments, and adding the line
-'*/2 * * * * PATH-TO-DIR/Queue_map/grab_queue_files.sh'
+Both Methods:
+------------------------------------------------------------------------
+If everything appears to be working, you should next make the non-setup
+script you configured earlier into a cron job. Do this by typing
+'crontab -e', going to the last line of the file under all of the
+comments, and adding the line
+'*/2 * * * * PATH-TO-DIR/Queue_map/(preferred)_queue_files.sh'
 This will run the grabbing script every two minutes, which will
 automatically keep your info up to date.
 
-Besides Apache and PHP, you must have 'sshpass' installed on your
-webserver in order for this to work correctly. Debian/Ubuntu users can:
+Besides Apache and PHP, if you chose Method (1) you must have 'sshpass'
+installed on your webserver in order for this to work correctly.
+Debian/Ubuntu users can:
 'sudo apt-get install sshpass'. RPM users, I haven't tried it yet.
 
 * This has been tested on the latest versions of Firefox, Vivaldi, and Opera. It's unknown to me

@@ -81,4 +103,4 @@
 Problems with a Queue or specific node: seek crc support -
 crcsupport@listserv.nd.edu
 
-Version: 0.8-Beta-2.1
+Version: 0.8-Beta-3
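The sudoers change mentioned in Method (2) could look roughly like this; the user name, file location, and command list are assumptions for illustration, not something this repo ships:

```text
# /etc/sudoers.d/queuemap -- hypothetical drop-in; edit with 'visudo -f'
# Lets the cron user run only the commands the grab script needs,
# passwordless, instead of echoing a stored password into 'sudo -S'.
webcron ALL=(root) NOPASSWD: /bin/mv, /bin/mkdir, /bin/cp
```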

curl_queue_files.sh

Lines changed: 44 additions & 0 deletions

@@ -0,0 +1,44 @@
+#!/bin/bash
+
+# Bash script to gather the html files for a Queuemap webpage. It grabs the
+# files from a web-hosted service, like the www directory in my Public afs
+# space.
+
+# Local info:
+pswd="LOCAL (SERVER) PASSWORD HERE!!!" # or configure the sudoers file
+
+desired_path="DESIRED PATH FOR SERVER HERE!" # typically /var/www/html (no trailing '/')
+
+curl_url="URL FOR CURL GOES HERE" # no trailing '/'
+
+# Gathering files from the CRC front end using curl
+curl -o index-long.html "$curl_url/index-long.html"
+curl -o index-debug.html "$curl_url/index-debug.html"
+curl -o pending_content.html "$curl_url/pending.html"
+curl -o sub-debug.tar.gz "$curl_url/sub-debug.tar.gz"
+curl -o sub-long.tar.gz "$curl_url/sub-long.tar.gz"
+
+# Moving the files to their proper locations on the web server
+echo "$pswd" | sudo -S mv index-long.html "$desired_path/Long/index-long.html"
+echo "$pswd" | sudo -S mv index-debug.html "$desired_path/Debug/index-debug.html"
+echo "$pswd" | sudo -S mv pending_content.html "$desired_path/Pending/pending_content.html"
+
+# Setting up node files:
+tar -xzf sub-debug.tar.gz
+for i in debug@*; do
+    echo "$pswd" | sudo -S mv "$i" "$desired_path/Debug/$i/sub-index.html"
+done
+
+tar -xzf sub-long.tar.gz
+for j in d6copt*; do
+    echo "$pswd" | sudo -S mv "$j" "$desired_path/Long/$j/sub-index.html"
+done
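The tar-then-move step above can be exercised locally without curl or sudo. This sketch fakes the extracted tarball contents and uses a temp directory in place of the web root; all node names and paths are made up:

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"
# Stand-ins for two extracted per-node files and their target dirs
mkdir -p webroot/Debug/debug@n001 webroot/Debug/debug@n002
echo '<p>node 1</p>' > 'debug@n001'
echo '<p>node 2</p>' > 'debug@n002'
# Same loop shape as curl_queue_files.sh: each file lands in its node's dir
for i in debug@*; do
    mv "$i" "webroot/Debug/$i/sub-index.html"
done
ls webroot/Debug/debug@n001
```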

curl_setup.sh

Lines changed: 78 additions & 0 deletions

@@ -0,0 +1,78 @@
+#!/bin/bash
+
+# Script to set up the CRC Queue heat-map. That's what it's called, I guess.
+# 8/3/16
+# Requires sudo permission. This script assumes you have apache2 running and
+# php7.0 (or something compatible). The default path is listed below.
+
+# This needs to be an absolute path
+desired_path="/var/www/html" # you can change this if need be
+
+# CRC info to gather files
+webpage_url="URL FOR CRC FILES HERE"
+
+# Local info to mv files to protected areas
+psword="LOCAL (WEBSERVER) SUDO PASSWORD"
+long_file="long_nodes.txt"   # These can stay this way
+debug_file="debug_nodes.txt" # "
+
+# Creating dirs and moving files to their proper locations
+echo "Creating Debug, Long, and Pending directories in $desired_path . . ."
+echo "$psword" | sudo -S mkdir "$desired_path/Debug"
+echo "$psword" | sudo -S mkdir "$desired_path/Long"
+echo "$psword" | sudo -S mkdir "$desired_path/Pending"
+
+echo "Moving indexes to their rightful places . . ."
+echo "$psword" | sudo -S cp index-long.php "$desired_path/Long/index.php"
+echo "$psword" | sudo -S cp index-debug.php "$desired_path/Debug/index.php"
+echo "$psword" | sudo -S cp index-pending.php "$desired_path/Pending/index.php"
+
+echo "Transferring templates to $desired_path . . ."
+echo "$psword" | sudo -S cp -r templates "$desired_path/templates"
+
+echo "Transferring styles.css to $desired_path . . ."
+echo "$psword" | sudo -S cp styles.css "$desired_path/"
+
+echo "Gathering node-list files from $webpage_url . . ."
+curl -o debug_nodes.txt "$webpage_url/debug_node_list.html"
+curl -o long_nodes.txt "$webpage_url/long_node_list.html"
+
+# Creating and initializing each node's dir
+# Long-queue nodes
+echo "Creating each node's dir at $desired_path/Long . . ."
+while IFS= read -r line
+do
+    echo "$psword" | sudo -S mkdir "$desired_path/Long/$line"
+    echo "$psword" | sudo -S cp sub-index.php "$desired_path/Long/$line/index.php"
+done < "$long_file"
+
+echo "Creating each node's dir at $desired_path/Debug . . ."
+# Debug-queue nodes
+while IFS= read -r line
+do
+    echo "$psword" | sudo -S mkdir "$desired_path/Debug/$line"
+    echo "$psword" | sudo -S cp sub-index.php "$desired_path/Debug/$line/index.php"
+done < "$debug_file"
+
+echo "-----------------------COMPLETE-----------------------"
+echo ""
+echo "Setup complete. Please quickly verify everything was made correctly."
+echo "You can do this by opening a browser, going to localhost, and navigating"
+echo "to your Long or Debug directories."
+echo ""
+echo "Please be sure that the python script is running on a front end, and that"
+echo "all scripts are configured to the location it will be writing files to."
+echo ""
+echo "Once you know things are where they should be, make sure you configured the"
+echo "grab_queue_files.sh script for your info to grab the files."
+echo "If it is configured already, add one of the two lines below to crontab -e:"
+echo "If you chose method (1) as described in the README, add this to cron:"
+echo "*/2 * * * * $(pwd)/grab_queue_files.sh"
+echo ""
+echo "If you chose method (2), add this to cron:"
+echo "*/2 * * * * $(pwd)/curl_queue_files.sh"
queue_mapd.py

Lines changed: 22 additions & 4 deletions

@@ -5,7 +5,7 @@
 from that information for a 'heat' map of the queue. This partial page is a component to
 be included from index.php on the current web-server. There are two other components,
 header.html and footer.html for each: Debug and Long. Latest update:
-Aug 12th, 2016 v0.8.beta-2.1
+Aug 17th, 2016 v0.8.1-beta-3
 Exit codes: 0 - Good
            20 - Bad Pending Job status"""

@@ -197,7 +197,8 @@ def set_status(self, status):
         elif status == 'Eqw':
             status = 'Error'
         else:
-            sys.exit(20)
+            # sys.exit(20)
+            write_log(status, 20)
         self.status = status
         return

@@ -216,6 +217,8 @@ def get_date(self):
 #^--------------------------------------------------------- class Pending(Job)

 # If you change the names here, don't forget to change them in the cron job script and the php files on the webserver!
+# If you are using a curl method of obtaining files, make sure you change the path to your www dir!
+# (as in afs/crc.nd.edu/user/j/jdoe/www/index-long.html)
 LONG_SAVE_FILE = 'index-long.html'
 DEBUG_SAVE_FILE = 'index-debug.html'
 PENDING_SAVE_FILE = 'pending.html'

@@ -512,9 +515,9 @@ def tar_node_files(node_list, Queue):
     why this script should be running in its own directory."""

     if Queue == 'Long':
-        save_name = 'sub-long.tar.gz'
+        save_name = '/afs/crc.nd.edu/user/c/ckankel/www/sub-long.tar.gz'
     else:
-        save_name = 'sub-debug.tar.gz'
+        save_name = '/afs/crc.nd.edu/user/c/ckankel/www/sub-debug.tar.gz'

     tar = tarfile.open(save_name, 'w:gz')
     for node in node_list:

@@ -565,6 +568,21 @@ def write_setup_files(node_list, queue_name):
     return
 #^--------------------------------------------------------- write_setup_files(node_list)

+def write_log(info, code):
+    """Write to a log when an error occurs, then exit with the given code."""
+    log_name = 'queue_mapd.log'
+    log = open(log_name, 'a')
+    date = subprocess.getoutput('date')
+
+    if int(code) == 20:
+        content = 'I am {0}, and have died because of a bad pending job status, with {1} as the attempted status, on {2}\n'.format(sys.argv[0], info, date)
+    else:
+        content = 'I am {0}, but I do not know how I got to the point of writing a log...\n'.format(sys.argv[0])
+
+    log.write(content)
+    log.close()
+    sys.exit(code)
+#^--------------------------------------------------------- write_log(info, code)

 def show_usage():
     """Method to display how to use this script on stdout"""

setup.sh

Lines changed: 4 additions & 0 deletions

@@ -75,4 +75,8 @@ echo ""
 echo "Once you know things are where they should be, make sure you configured the"
 echo "grab_queue_files.sh script for your info to grab the files."
 echo "If it is configured already, add this line to crontab -e:"
+echo "If you chose method(1) as described in README, add this to cron:"
 echo "*/2 * * * * $(pwd)/grab_queue_files.sh"
+echo ""
+echo "If you chose method(2), add this to cron:"
+echo "*/2 * * * * $(pwd)/curl_queue_files.sh"

templates/favicon.ico

104 Bytes
Binary file not shown.

templates/footer.html

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 </div>
 <div id="footer">
-<p>v0.8-beta-2</p>
+<p>v0.8-beta-3</p>
 </div>
 </body>
 </html>

templates/header.html

Lines changed: 1 addition & 0 deletions

@@ -4,6 +4,7 @@
 <title>CRC Queue Status</title>
 <link rel="stylesheet" href="../styles.css">
 <META HTTP-EQUIV="refresh" CONTENT="60">
+<link rel="shortcut icon" type="image/x-icon" href="../templates/favicon.ico" />
 </head>
 <body>
 <div id="header">

templates/pending-header.html

Lines changed: 1 addition & 0 deletions

@@ -4,6 +4,7 @@
 <title>CRC Pending Jobs</title>
 <link rel="stylesheet" href="../styles.css">
 <META HTTP-EQUIV="refresh" CONTENT="60">
+<link rel="shortcut icon" type="image/x-icon" href="../templates/favicon.ico" />
 </head>
 <body>
 <div id="header">

templates/sub-header.html

Lines changed: 1 addition & 0 deletions

@@ -4,6 +4,7 @@
 <title>CRC Node Status</title>
 <link rel="stylesheet" href="../../styles.css">
 <META HTTP-EQUIV="refresh" CONTENT="60">
+<link rel="shortcut icon" type="image/x-icon" href="../../templates/favicon.ico" />
 </head>
 <body>
 <div id="header">
