Introduction

Task Spooler was originally developed by Lluis Batlle i Rossell but is no longer maintained. The branch introduced here is a fork of the original program with more features, including GPU support.

Installation

First, clone Task Spooler from GitHub. Optionally, you can choose a different version by checking out another tag; in this tutorial, I will use the latest version on master.

%%capture
!git clone https://github.com/justanhduc/task-spooler
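
If you do want to pin a specific version instead of master, you can list the tags after cloning and check one out. The command below is just a sketch; <tag> is a placeholder for whichever tag you pick.

!cd task-spooler && git tag            # list the available tags, if any
# !cd task-spooler && git checkout <tag>  # uncomment and replace <tag> to pin a version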

Next, you need to set a CUDA_HOME environment variable pointing to the CUDA root directory. Then, you can execute the provided install script.

!cd task-spooler/ && CUDA_HOME=/usr/local/cuda ./reinstall
rm -f *.o ts
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c main.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c server.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c server_start.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c client.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c msgdump.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c jobs.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c execute.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c msg.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c mail.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c error.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c signals.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c list.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c print.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c info.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c env.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -c tail.c
cc -pedantic -ansi -Wall -g -O0 -std=c11 -D_XOPEN_SOURCE=500 -D__STRICT_ANSI__ -L/usr/local/cuda/lib64 -I/usr/local/cuda/include -lpthread -c gpu.c
In file included from gpu.c:6:0:
/usr/local/cuda/include/nvml.h:6208:51: warning: ISO C restricts enumerator values to range of ‘int’ [-Wpedantic]
     NVML_VGPU_COMPATIBILITY_LIMIT_OTHER         = 0x80000000,    //!< Compatibility is limited by an undefined factor.
                                                   ^~~~~~~~~~
cc  -o ts main.o server.o server_start.o client.o msgdump.o jobs.o execute.o msg.o mail.o error.o signals.o list.o print.o info.o env.o tail.o gpu.o -L/usr/local/cuda/lib64 -L/usr/local/cuda/lib64/stubs -I/usr/local/cuda/include -lpthread -lcudart -lcublas -fopenmp -lnvidia-ml
make: 'uninstall' is up to date.
install -c -d /usr/local/bin
install -c ts /usr/local/bin
install -c -d /usr/local/share/man/man1
install -c -m 644 ts.1 /usr/local/share/man/man1

Basics of Task Spooler

First look

!ts
ID   State      Output               E-Level  Time   GPUs  Command [run=0/1]

The interface of Task Spooler, shown above, appears when you execute ts without any argument. ID refers to the job ID. There are four main values of State: running means the job is currently being executed, queued means a CPU job is waiting to be executed, allocating means a GPU job is waiting for free GPUs, and finished means the job has completed. When a job is executed, its stdout stream is redirected to a file listed in the Output column. These log files are never deleted automatically, even after the job list is cleared. E-Level captures and displays the exit code of the process. Time indicates the running time of a job. The command itself is shown in the Command column. The numbers inside the square brackets next to Command specify the number of currently running jobs and the maximum number of jobs (slots) that can run in parallel. In the output above, for example, no job is running and at most one job can run at a time. The maximum slot number can be adjusted manually.

Queuing your first job

Jobs can be added by simply prepending ts to your command. For example, to make the system sleep for 10 seconds using Task Spooler, execute

!ts sleep 10
!ts
!sleep 10  # let's check ts again after 10 seconds
!ts
0
ID   State      Output               E-Level  Time   GPUs  Command [run=1/1]
0    running    /tmp/ts-out.j0MGwO                   0     sleep 10
ID   State      Output               E-Level  Time   GPUs  Command [run=0/1]
0    finished   /tmp/ts-out.j0MGwO   0        10.00s 0     sleep 10

You can see that the job with ID 0 is running at first; after 10 seconds, it finishes with an E-Level of 0. If you had queued a second job in the meantime, it would have stayed in the queued state and started only after the first job finished.

To run more jobs in parallel, you can increase the maximum number of slots with the -S flag followed by the desired number. For instance,

!ts -S 4
!ts
ID   State      Output               E-Level  Time   GPUs  Command [run=0/4]
0    finished   /tmp/ts-out.j0MGwO   0        10.00s 0     sleep 10

The command above allows you to run 4 jobs at the same time. You can verify this by typing ts: the last number in the square brackets should change to 4. Let's try queuing 5 jobs at once, this time with longer sleep times so that the jobs don't end too fast. You should be able to see something like this:

!ts sleep 100
!ts sleep 20
!ts sleep 30
!ts sleep 40
!ts sleep 10
!ts
1
2
3
4
5
ID   State      Output               E-Level  Time   GPUs  Command [run=4/4]
1    running    /tmp/ts-out.xDq00e                   0     sleep 100
2    running    /tmp/ts-out.HUzUai                   0     sleep 20
3    running    /tmp/ts-out.sYcGno                   0     sleep 30
4    running    /tmp/ts-out.ArV4nv                   0     sleep 40
5    queued     (file)                               0     sleep 10
0    finished   /tmp/ts-out.j0MGwO   0        10.00s 0     sleep 10

Viewing command outputs

As mentioned above, the stdout of the command is redirected to a file whose path appears in the Output column. To see the written output manually, you can simply open that file. But of course Task Spooler offers more than that: it lets you read the output contents in two ways, via the flags -t and -c.

-c, which stands for cat, shows the whole output from beginning to end. -t, which means tail, displays only the last 10 lines of the output. Let's try them out. First, run something that produces a lot of text, like ls, df or du; the choice is yours. In my case, I ran ts ls /usr/bin. The job ID of the command was 0, so to view the whole output I used ts -c 0, which displayed a long list of executable files. When I typed ts -t 0, it showed only the last 10 lines.

!ts -K  # reset Task Spooler; this flag is introduced later
!ts ls /usr/bin
!ts -t 0

0
yes
zdump
zip
zipcloak
zipdetails
zipgrep
zipinfo
zipnote
zipsplit
zrun

%%capture

!ts -c 0

Miscellaneous

There are many other flags to help you manage your tasks. First of all, to see all the available options, use the -h flag. Among these, the ones you will probably use most are -r, -C, -k, -T and -K. To remove a finished, queued or allocating job, use -r, optionally followed by a job ID. For example, ts -r removes the last added job if it is not running yet, and ts -r 10 removes the job with ID 10. If the job is successfully removed, it disappears from the job list.
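
For instance, you can dump the full option list like this; the output is long, so I suppress it here with %%capture, as done elsewhere in this notebook.

%%capture
!ts -h  # print the complete list of flags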

!ts -K
!ts -S 2  # let's run 2 tasks at a time
!ts sleep 100
!ts sleep 100
!ts sleep 100
!ts
0
1
2
ID   State      Output               E-Level  Time   GPUs  Command [run=2/2]
0    running    /tmp/ts-out.gClvpl                   0     sleep 100
1    running    /tmp/ts-out.rW9nIv                   0     sleep 100
2    queued     (file)                               0     sleep 100
!ts -r 2  # remove job 2
!ts
ID   State      Output               E-Level  Time   GPUs  Command [run=2/2]
0    running    /tmp/ts-out.gClvpl                   0     sleep 100
1    running    /tmp/ts-out.rW9nIv                   0     sleep 100

To kill a running job, use ts -k <jobid>.

!ts -k 0  # let's kill job 0
!ts
ID   State      Output               E-Level  Time   GPUs  Command [run=1/2]
1    running    /tmp/ts-out.rW9nIv                   0     sleep 100
0    finished   /tmp/ts-out.gClvpl   -1        8.07s 0     sleep 100
!ts -S 5
!ts sleep 100
!ts sleep 100
!ts sleep 100
!ts
3
4
5
ID   State      Output               E-Level  Time   GPUs  Command [run=4/5]
1    running    /tmp/ts-out.rW9nIv                   0     sleep 100
3    running    /tmp/ts-out.BeUKip                   0     sleep 100
4    running    /tmp/ts-out.uFu50z                   0     sleep 100
5    running    /tmp/ts-out.o0hd1F                   0     sleep 100
0    finished   /tmp/ts-out.gClvpl   -1        8.07s 0     sleep 100

To kill all running jobs, use ts -T.

!ts -T  # terminates all running jobs
!ts
ID   State      Output               E-Level  Time   GPUs  Command [run=0/5]
0    finished   /tmp/ts-out.gClvpl   -1        8.07s 0     sleep 100
1    finished   /tmp/ts-out.rW9nIv   -1       22.42s 0     sleep 100
5    finished   /tmp/ts-out.o0hd1F   -1        8.84s 0     sleep 100
3    finished   /tmp/ts-out.BeUKip   -1        9.06s 0     sleep 100
4    finished   /tmp/ts-out.uFu50z   -1        8.95s 0     sleep 100

To clear all the finished jobs from the list, use -C without an argument.

!ts sleep 100
!ts -C  # clear job list
!ts
6
ID   State      Output               E-Level  Time   GPUs  Command [run=1/5]
6    running    /tmp/ts-out.bOY0Sx                   0     sleep 100

Finally, ts -K will kill the Task Spooler process.

!ts -K  # let's kill Task Spooler
!ts  # then restarts
ID   State      Output               E-Level  Time   GPUs  Command [run=0/1]

There are some useful flags for scheduling tasks as well. You may want to execute a task only after a certain job finishes. In this case, you can use the flag -d with no argument to make the new task depend on the last added job; -D followed by a comma-separated list of job IDs to make it depend on those specific jobs; or -W followed by a list of job IDs, which makes the job run only if all of its dependencies finish with exit code 0. For example,

!ts -S 10
# let's queue 3 jobs first
!ts sleep 100
!ts sleep 100
!ts sleep 200
!ts
0
1
2
ID   State      Output               E-Level  Time   GPUs  Command [run=3/10]
0    running    /tmp/ts-out.1wh18P                   0     sleep 100
1    running    /tmp/ts-out.aqr1P0                   0     sleep 100
2    running    /tmp/ts-out.SLCGX7                   0     sleep 200
!ts -d sleep 10  # does not care about exit code
!ts -D 0,1,3 sleep 10  # runs after jobs 0, 1 and 3
!ts -W 0,2,3 sleep 10  # to run this job, jobs 0, 2 and 3 need to finish well
!ts
3
4
5
ID   State      Output               E-Level  Time   GPUs  Command [run=3/10]
0    running    /tmp/ts-out.1wh18P                   0     sleep 100
1    running    /tmp/ts-out.aqr1P0                   0     sleep 100
2    running    /tmp/ts-out.SLCGX7                   0     sleep 200
3    queued     (file)                               0     [2]&& sleep 10
4    queued     (file)                               0     [0,1,3]&& sleep 10
5    queued     (file)                               0     [0,2,3]&& sleep 10
!ts -k 2
!ts
ID   State      Output               E-Level  Time   GPUs  Command [run=3/10]
0    running    /tmp/ts-out.1wh18P                   0     sleep 100
1    running    /tmp/ts-out.aqr1P0                   0     sleep 100
3    running    /tmp/ts-out.suaN1K                   0     [2]&& sleep 10
4    queued     (file)                               0     [0,1,3]&& sleep 10
5    queued     (file)                               0     [0,2,3]&& sleep 10
2    finished   /tmp/ts-out.SLCGX7   -1       10.35s 0     sleep 200
!sleep 100  # let's wait for jobs 0 and 1 to finish
!ts  # you will see that the job queued with `-W` will be skipped
ID   State      Output               E-Level  Time   GPUs  Command [run=0/10]
2    finished   /tmp/ts-out.SLCGX7   -1       10.35s 0     sleep 200
3    finished   /tmp/ts-out.suaN1K   0        10.00s 0     [2]&& sleep 10
0    finished   /tmp/ts-out.1wh18P   0         1.67m 0     sleep 100
5    skipped    (no output)                          0     [0,2,3]&& sleep 10
1    finished   /tmp/ts-out.aqr1P0   0         1.67m 0     sleep 100
4    finished   /tmp/ts-out.yV8vfT   0        10.00s 0     [0,1,3]&& sleep 10

To distinguish tasks, you can also label them using the -L flag.

!ts -L foo sleep 10
6
!ts
ID   State      Output               E-Level  Time   GPUs  Command [run=0/10]
2    finished   /tmp/ts-out.SLCGX7   -1       10.35s 0     sleep 200
3    finished   /tmp/ts-out.suaN1K   0        10.00s 0     [2]&& sleep 10
0    finished   /tmp/ts-out.1wh18P   0         1.67m 0     sleep 100
5    skipped    (no output)                          0     [0,2,7303014]&& sleep 10
1    finished   /tmp/ts-out.aqr1P0   0         1.67m 0     sleep 100
4    finished   /tmp/ts-out.yV8vfT   0        10.00s 0     [0,1,3]&& sleep 10
6    finished   /tmp/ts-out.EO9Qct   0        10.00s 0     [foo]sleep 10

GPU support

The GPUs column shows the number of GPUs that the task requires.

Previously, when running CPU tasks, the number of parallel tasks was capped only by the number of slots. For a GPU task, it is further restricted by the number of available GPUs. In other words, a GPU task can run only when there are both enough slots and enough free GPUs. The availability of a GPU is determined by its free memory: if more than 90% of the memory is free, the GPU is deemed available; otherwise it is considered busy. If there are more free GPUs than required, the GPUs are chosen auto-magically and randomly.

One thing to note here: because the availability of a GPU is determined by its memory usage, and it may take time for your task to initialize GPU memory, two tasks launched at the same time may end up on the same device and eventually crash with an out-of-memory error. Therefore, Task Spooler deliberately delays each subsequent GPU task for a short time (30 seconds by default) after a GPU task has just been launched. This is ugly, but it does the job. You can change this delay via the flag --set_gpu_wait followed by the number of seconds. This is also why, when you queue several GPU jobs at once, the tasks after the first one may take a while to start. Sometimes you may also see a job's status change to running even though the task has not actually started yet and there is no output file. This is normal; just keep waiting, and it will be executed soon (or sometimes not so soon, but it will run)!
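
For example, if 30 seconds does not suit your setup, you could change the delay like this; 60 is just an arbitrary value for illustration.

!ts --set_gpu_wait 60  # wait 60 seconds between consecutive GPU job launches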

Now, to tell Task Spooler that your job requires GPUs, use -G followed by the number of required GPUs. Task Spooler will allocate the GPU(s) for the job and make your job see only the allocated GPU(s), so your task won't interfere with the others. As a trivial example, let's sleep with 1 GPU. In your terminal, execute

!ts -K
!ts -G 1 sleep 1
!ts
0
ID   State      Output               E-Level  Time   GPUs  Command [run=1/1]
0    running    /tmp/ts-out.N6RDHT                   1     sleep 1

If you request more GPUs than are available, however, the task will be queued even though there are enough slots.

!ts -G 100 sleep 1
!ts
1
ID   State      Output               E-Level  Time   GPUs  Command [run=0/1]
1    allocating (file)                               100   sleep 1
0    finished   /tmp/ts-out.N6RDHT   0         1.00s 1     sleep 1

In the output above, I requested 100 GPUs even though the server has only 1, and hence the task has to wait in the queue (in this case, forever).
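
Since this job will never get its 100 GPUs here, you can simply drop it from the queue with -r, as introduced earlier:

!ts -r 1  # remove the hopeless job with ID 1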

We haven’t done anything useful yet. In the next section, let’s see how to manage your deep learning experiments using Task Spooler.

Deep learning with Task Spooler

Let's train a Convolutional Neural Network (CNN) on MNIST. For this example, I will use the official PyTorch MNIST example. To enable multi-GPU training, you will have to manually add

model = nn.DataParallel(model)

after line 124 (optimizer = optim.Adadelta(model.parameters(), lr=args.lr)). You can download the script by executing the cell below.

%%capture
!wget https://open-source-codes.s3.amazonaws.com/mnist.py
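
If you want to double-check where to insert the DataParallel line, you can print the optimizer line together with its line number; this assumes the downloaded script still matches the official example.

!grep -n "optim.Adadelta" mnist.py  # per the instructions above, this should be line 124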

To train the CNN with Task Spooler using 1 GPU, execute the script as usual in a terminal, but with ts -G 1 in front of python. The full command is

!ts -K
!ts -G 1 python mnist.py
!ts
0
ID   State      Output               E-Level  Time   GPUs  Command [run=1/1]
0    running    /tmp/ts-out.xwvuBP                   1     python mnist.py

Note that without the -G flag, the job will run on CPU instead.

To see the output, use the -c or -t flag. You should see the training progress in real time. You can press Ctrl+C at any time to stop reading the stdout without actually canceling the experiment.

%%capture
!ts -t 0
!ts
ID   State      Output               E-Level  Time   GPUs  Command [run=1/1]
0    running    /tmp/ts-out.xwvuBP                   1     python mnist.py

Unfortunately, there is only 1 GPU available in Colab, so I can't demonstrate training with multiple GPUs. You will have to trust me that it works!

That's it, folks. I hope this little app can boost your productivity and that you will enjoy using it not only for your experiments but also for your daily tasks. If you have any questions or want to contribute, feel free to create an issue or make a PR on the GitHub page.

About me

I am Duc Nguyen from Vietnam. Currently, I am a PhD candidate at Yonsei University, Korea. For more information about me, you can visit my website or contact me at this email.