Version 1.6.2-51 | Writing Parallel ISIS Scripts
ISIS provides a number of functions that enable users to write scripts that run in parallel. Such a script can create a number of slave processes, each of which performs an assigned task and then sends the result back to the master process. If necessary, the master process can then assign more work to each slave process, iterating until all the work is done.

In very simple cases, one can use parallel_map to get the advantage of multi-core parallelism without any additional work to manage communication between processes. Here's a trivial example to demonstrate how this works. Suppose you have a calculation that takes T seconds and needs to be performed for N different cases. To keep this example simple, the example subroutine will sleep for T seconds and then return an integer and a random number:

   define example (i, T)
   {
      sleep (T);
      return i, urand();
   }

Calling this subroutine interactively, we see:

   isis> (i,r) = example (0, 1);
   isis> i;
   0
   isis> r;
   0.8147236863931789

We can use array_map to execute this subroutine 10 times on one CPU:

   (i, r) = array_map (Int_Type, Double_Type, &example, [1:10], 1);

Because the sleep time is 1 second, this array_map call takes 10 seconds. Using parallel_map:

   (i, r) = parallel_map (Int_Type, Double_Type, &example, [1:10], 1);

doing the same amount of work takes only 4 seconds on a 4-CPU machine, corresponding to a speedup factor of 2.5. Doing 100 iterations on the same hardware, the parallel_map speedup is a factor of 3.57, closer to the theoretical maximum of 4. (With only 10 one-second tasks on 4 CPUs, at least 3 seconds of wall-clock time are needed even with perfect load balancing, and forking the slave processes adds overhead; with 100 tasks that overhead is better amortized.)

In some cases, it may be desirable to have customized communication between the master and slave processes. An example script that may provide a useful template for practical parallel calculations is available here. Here's a simple example that shows how the interprocess communication works:

   % Each slave process calls this subroutine and then exits.
   private define slave_task (s, k)
   {
      send_msg (s, SLAVE_RESULT);
      send_objs (s, "Hello", k);
      variable r = recv_objs (s);
      vmessage ("slave %d received: %s", k, r[0]);
      return 0;
   }

   % This subroutine is called whenever the master process
   % receives a message from one of the slave processes.
   private define slave_handler (s, msg)
   {
      switch (msg.type)
      {
       case SLAVE_RESULT:
         variable objs = recv_objs (s);
         send_objs (s, sprintf ("Hi there %d!", objs[1]));
      }
   }

   % This is the main program, defining the master process.
   define isis_main ()
   {
      variable s, slaves = new_slave_list ();
      variable k, num_slaves = _num_cpus();
      _for k (0, num_slaves-1, 1)
      {
         s = fork_slave (&slave_task, k);
         append_slave (slaves, s);
      }
      manage_slaves (slaves, &slave_handler);
   }

The main program, isis_main, starts several slave processes, one per CPU, by calling fork_slave in a loop. A list of the running slaves is accumulated in the data structure called slaves. After starting the slave processes, the master process must communicate with the slaves until the last slave exits. In the call to manage_slaves, the master indicates that incoming messages will be handled by the slave_handler function.

The slave_handler function is called whenever the master process receives a message from one of the slaves. When a message of type SLAVE_RESULT is received, the slave_handler function receives the associated data and then replies by sending a string.

Each slave process runs the slave_task function and then exits. In this example, the slave_task function sends a SLAVE_RESULT message to the master process, and then sends the master two objects, a string and an integer.
The slave then waits for the master to reply with a single string. When the slave receives the reply, it prints a message and then exits. Running this example on a machine with 4 compute cores produces the following output:
   > isis hello.sl
   slave 0 received: Hi there 0!
   slave 1 received: Hi there 1!
   slave 2 received: Hi there 2!
   slave 3 received: Hi there 3!

For details, see the ISIS user's manual.
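The hello.sl example exchanges only a single round of messages. To illustrate the "assign more work until it is all done" pattern mentioned above, here is a minimal sketch built from the same calls shown in the example (fork_slave, manage_slaves, send_msg, send_objs, recv_objs). The task array Task_Args, the results array Task_Results, the counter Next_Task, the worker function do_one_task, and the -1 "no more work" sentinel are names introduced here for illustration only; they are not part of the ISIS API.

   % Hypothetical work list and result store; these live in the master,
   % and each forked slave inherits its own copy of Task_Args.
   private variable Task_Args = [1:20];
   private variable Task_Results = Double_Type[length(Task_Args)];
   private variable Next_Task = 0;

   % Hypothetical per-item calculation, standing in for real work.
   private define do_one_task (x)
   {
      return sqrt (x);
   }

   private define slave_task (s, k)
   {
      variable i = -1, result = 0.0;
      forever
      {
         % Report the previous result (i < 0 means "nothing yet")
         % and ask the master for another task index.
         send_msg (s, SLAVE_RESULT);
         send_objs (s, i, result);
         variable reply = recv_objs (s);
         i = reply[0];
         if (i < 0)
           break;                              % master says all work is done
         result = do_one_task (Task_Args[i]);
      }
      return 0;
   }

   private define slave_handler (s, msg)
   {
      switch (msg.type)
      {
       case SLAVE_RESULT:
         variable objs = recv_objs (s);
         if (objs[0] >= 0)
           Task_Results[objs[0]] = objs[1];    % record the finished task
         if (Next_Task < length(Task_Args))
         {
            send_objs (s, Next_Task);          % hand out the next task
            Next_Task++;
         }
         else send_objs (s, -1);               % no work left
      }
   }

   define isis_main ()
   {
      variable s, slaves = new_slave_list ();
      variable k, num_slaves = _num_cpus();
      _for k (0, num_slaves-1, 1)
      {
         s = fork_slave (&slave_task, k);
         append_slave (slaves, s);
      }
      manage_slaves (slaves, &slave_handler);
   }

Because each slave runs in its own forked process, it works on its inherited copy of Task_Args, while Task_Results and Next_Task are updated only in the master, inside slave_handler; keeping all of the bookkeeping in the master avoids any need for shared memory between the processes.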
This page is maintained by John C. Houck. Last updated: Apr 3, 2022