Ireland's High-Performance Computing Centre | ICHEC

Task Farming

What is task farming?

Task farming is a technique that allows users to achieve high throughput of serial (single-processor) tasks on large parallel computers. ICHEC provides a utility called taskfarm for this purpose. Because ICHEC policies do not facilitate individual serial jobs, users must use taskfarm to run multiple serial processes within a single job.

How to access the taskfarm

The taskfarm utility is not in your default PATH but can be loaded when required using an environment module. The command below can be entered before submission if the PBS script uses the -V directive to propagate environment settings, or it can be added to the job submission script itself.

module load apps taskfarm/2.6

What is a task?

In this context a task is a single command or a group of consecutively executed commands. An example of a group of commands as a task:

cd dir01; ../my_executable input_file > output_file

This simple task changes to a specific subdirectory and runs an executable from the parent directory with an input file. The output from the executable is redirected to a file. It is important that all the components of a task are separated by semicolons and appear on a single line. More complex tasks involving multiple steps can also be created, for example:

cd dir01; pwd; date; ../my_executable input_file > output_file; ../my_postprocessor output_file > processed_output_file; date

A taskfarm example

This is an example of using taskfarm running on 24 cores on Fionn. For the purpose of the example we will assume that we have a directory that contains the following:

  • a PBS script to run the taskfarm
  • the taskfarm input file
  • the executable(s) being run in the tasks
  • a subdirectory for each task containing its input file (task01, task02, ... )

First the PBS script (called tasks.pbs in this example). It runs a job on 24 cores, as found on a single Fionn node, for up to one hour. For more information see the ICHEC batch processing documentation.

#PBS -N Taskfarm_job_name
#PBS -A projectname
#PBS -r n
#PBS -j oe
#PBS -m bea
#PBS -M me@my_email.ie
#PBS -l nodes=1:ppn=24,walltime=1:00:00

# Change to this job's working directory
cd $PBS_O_WORKDIR
module load apps taskfarm/2.6
taskfarm tasks

Next the taskfarm input file (called tasks in this example), which contains 26 tasks. As Fionn has 24 cores per node, the taskfarm will initially start the first 24 tasks; each time a task exits, the next available task will be started.

cd task01; ../my_executable input_file > output_file
cd task02; ../my_executable input_file > output_file
cd task03; ../my_executable input_file > output_file
cd task04; ../my_executable input_file > output_file
cd task05; ../my_executable input_file > output_file
cd task06; ../my_executable input_file > output_file
cd task07; ../my_executable input_file > output_file
cd task08; ../my_executable input_file > output_file
cd task09; ../my_executable input_file > output_file
cd task10; ../my_executable input_file > output_file
cd task11; ../my_executable input_file > output_file
cd task12; ../my_executable input_file > output_file
cd task13; ../my_executable input_file > output_file
cd task14; ../my_executable input_file > output_file
cd task15; ../my_executable input_file > output_file
cd task16; ../my_executable input_file > output_file
cd task17; ../my_executable input_file > output_file
cd task18; ../my_executable input_file > output_file
cd task19; ../my_executable input_file > output_file
cd task20; ../my_executable input_file > output_file
cd task21; ../my_executable input_file > output_file
cd task22; ../my_executable input_file > output_file
cd task23; ../my_executable input_file > output_file
cd task24; ../my_executable input_file > output_file
cd task25; ../my_executable input_file > output_file
cd task26; ../my_executable input_file > output_file
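The 26 near-identical lines above need not be typed by hand; a short shell loop can generate them. This is a minimal sketch assuming the task01 ... task26 directory layout and the my_executable name used in this example; adjust the names and range to suit your own job:

```shell
#!/bin/sh
# Generate the taskfarm input file, one task line per subdirectory.
# Assumes directories task01..task26 and ../my_executable as in the
# example above.
: > tasks                          # create/empty the tasks file
for i in $(seq -w 1 26); do
    echo "cd task$i; ../my_executable input_file > output_file" >> tasks
done
```

The -w flag makes seq pad the numbers with leading zeros (01, 02, ..., 26) so they match the directory names.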

Finally submit the job.

qsub tasks.pbs

Additional taskfarm options

With taskfarm the following options are available:

  • TASKFARM_PPN, if this environment variable is set to an integer value between 1 and 8 on Stoney, or between 1 and 24 on Fionn, the taskfarm utility will use only that number of processors on each node allocated to it. This can be very useful if you need to limit the number of tasks running on a node at once because of memory requirements. By default 8 processors are used on Stoney and 24 on Fionn.
  • TASKFARM_SILENT, if this environment variable is set then the taskfarm utility will not produce output, however tasks may still produce their own output.
  • %TASKFARM_TASKNUM%, if this string is used in a task line the utility will replace it with the number of the task, starting from zero for the first task. This can be used, for instance, to create files or directories named after the task number.
  • TASKFARM_SMT, if this environment variable is set then the taskfarm utility will allocate as many tasks per core as the core supports threads in SMT mode. This option can be useful if tasks are heavy on I/O but light on computation, so that each core can sustain multiple tasks.
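As an illustration, these options might be combined in a job script as follows. This is a sketch, not a recommendation: the value of TASKFARM_PPN here (12 of Fionn's 24 cores) is purely illustrative and should be chosen to suit your tasks' memory needs.

```shell
# Fragment of a PBS job script setting taskfarm options before the run.

module load apps taskfarm/2.6

# Illustrative value: run at most 12 tasks per 24-core Fionn node,
# e.g. because each task needs roughly twice the per-core share of memory.
export TASKFARM_PPN=12

# Suppress taskfarm's own progress messages (task output is unaffected).
export TASKFARM_SILENT=1

taskfarm tasks
```

In the tasks file itself, a line such as `cd task01; ../my_executable input_file > output_%TASKFARM_TASKNUM%` (using the hypothetical my_executable from the example above) would send each task's output to a separate numbered file.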

Efficiency considerations

  • The minimum number of tasks specified should be the number of CPUs requested for the batch job.
  • The tasks (or series of sub-tasks) to be executed within the task farm should preferably have no dependencies on each other.
  • It is important that the tasks are reasonably well balanced. If one task has a substantially longer runtime, it may continue running long after the others have completed, wasting a lot of CPU time. Users whose tasks have unpredictable runtimes are best advised to put a large number of tasks in their input file and run for a longer period; this yields a higher average efficiency, albeit with some killed tasks to be cleaned up when the walltime expires.
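To see why balance and task count matter, consider a back-of-the-envelope calculation for the 26-task example above: with 26 equal one-hour tasks on 24 cores, the farm needs two rounds, and in the second round only 2 cores are busy. A quick shell sketch of the arithmetic (integer division, so the figures are approximate):

```shell
#!/bin/sh
# Rough efficiency estimate for the 26-task example on 24 cores,
# assuming every task takes exactly one hour.
tasks=26
cores=24
hours_per_task=1

# Two "rounds": tasks 1-24 run first, then tasks 25-26 while 22 cores idle.
walltime=$(( (tasks + cores - 1) / cores * hours_per_task ))
efficiency=$(( 100 * tasks * hours_per_task / (cores * walltime) ))

echo "walltime=${walltime}h efficiency=${efficiency}%"
# prints: walltime=2h efficiency=54%
```

With 260 such tasks instead of 26, the final part-full round is a much smaller fraction of the total (efficiency around 98%), which is why many short tasks average out better than a few long ones.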