Goal
Provide a very easy way to run one program with different settings on a bunch of computers in parallel and collect the results. Simple configuration and wide applicability are the aim, along with fault tolerance with respect to network and administration errors.
Why not use something like MPI?
- Well, first of all, the programs I want to run are not written in C.
- I could write an MPI client that executes my program with the parameters, but then I could just as well use a shell script.
- I need a more flexible way to collect the results. For example, some output goes to a file. A shell or Perl script seems more convenient for doing a flexible conversion.
Terminology:
- Master: the computer where the server runs
- Slave: one of many computers that do the work
- Server: the program that coordinates the process
- Client: the program that runs on the slaves
- Worker: the program that does the computation (can be anything)
- Task: a set of parameters/settings for the Worker
- Result: a file with the results of the computation
- SessionID: unique number for client-server communication (should be unique across multiple runs)
- Ticket: computation identification, unique within one run
Specification
Cluster
- One Master
- Many Slaves
- primarily Linux machines, however the design is platform independent
- any TCP/IP network connection (can be non-permanent, but should not be :-))
- SSH (public key authentication) on Slave
- SCP secure copy (public key authentication) on Slave
- HTTP (any port open on Master, preferably port 80)
Features
- one master computer with the server program (an HTTP server). SSH and SCP are needed to get the client program to the slaves and start it.
- list of slave computers (host names or IPs). Every slave acts as an HTTP client.
- platform dependent workers possible
- command specification: command-line pattern with placeholders for variables and input file generation
- result specification: standard output and/or files
- validation of the results (done on the master)
- list of tasks characterised by parameters.
- timeouts and multiple task assignments if necessary (timeout, free resources and so on)
- collecting rules: a) plain concatenation b) blockwise with parameters
- simple statistics: which slave did what and which parameter sets failed.
- NFS aware
Error detection/ dealing
- error while connection/authentication (ssh, scp)
- slave dead / client killed (don't care, there are other slaves :-) )
- server breaks or gets stopped (all clients should terminate shortly afterwards)
- worker terminates without success
- worker doesn't return within timeout
Server
- Format of communication is specified in Protocol section
- initialisation: for every slave, try to start the client (via ssh). If that fails, check the ssh connection with a dummy ssh command; on success, copy the client to the slave using scp and try to start it again (see the sketch after this list).
- on HTTP request for the configuration: reply with the client configuration (i.e. worker name, MD5 checksum and what to do on exit)
- on HTTP request for the worker executable: reply with the binary for the right platform
- on HTTP request for a new task: reply with the next command to execute and all parameters.
- on HTTP request for statistics (normal website): reply with the statistics web page
- on POST: validate the result, mark the task as completed and collect the results
- no more tasks to process: exit and display statistics.
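A minimal sketch of the initialisation step, in Perl as used elsewhere in this document; the client script name (client.pl) and the exact command-line arguments are placeholders, not part of the specification:

# Hypothetical bootstrap of one slave; assumes the client detaches itself
# after startup, so the ssh command returns quickly with the startup status.
use strict;
use warnings;

sub bootstrap_slave {
    my ($slave, $sessionid, $url) = @_;
    my $start = "ssh $slave perl client.pl $sessionid $url";  # session id, server URL/port
    return 1 if system($start) == 0;                   # first start attempt
    return 0 if system("ssh $slave true") != 0;        # dummy command: ssh itself is broken
    return 0 if system("scp client.pl $slave:") != 0;  # client probably missing: copy it
    return system($start) == 0;                        # second start attempt
}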
Client
- Format of communication is specified in Protocol section
- gets via command line: Session ID, server URL and port
- register at the server and fetch configuration
- check for the worker: if it is not already on the local filesystem or its MD5 checksum is wrong, fetch it (for the client's own platform) from the server
- fetch a task
- run worker
- check the return code: if the worker failed, post a failure; otherwise take the results and post them.
- fetch the next task
- die if there is no task left or the server is not responding.
- different settings for termination: delete the worker executable (if fetched), delete the client program, delete the results?
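A rough sketch of this client loop, assuming LWP::UserAgent is available on the slaves; run_worker and post_result are hypothetical helpers, not part of the specification:

use strict;
use warnings;
use LWP::UserAgent;

my ($sessionid, $master) = @ARGV;     # e.g. 4711 http://master:80
my $ua = LWP::UserAgent->new;

# $^O as a simple platform guess (see the TODO in the Protocol section)
my $conf = $ua->get("$master/config?sessionid=$sessionid&platform=$^O");
die "no configuration\n" unless $conf->is_success;
# ... check/fetch the worker here as described above ...

while (1) {
    my $task = $ua->get("$master/task?sessionid=$sessionid");
    last unless $task->is_success;    # 503 (no task left) or server not responding
    my ($ticket, $ok, $results) = run_worker($task->content);       # hypothetical
    post_result($ua, $master, $sessionid, $ticket, $ok, $results);  # hypothetical
}
# clean up according to DeleteWorker/DeleteClient/DeleteResults, then exit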
Protocol
Configuration
- Request: GET http://master/config?sessionid=SESSIONID&platform=PLATFORM
- Fail (due to unsupported platform): 415 (Unsupported Media Type)
- Fail (due to wrong session id): 403 (Forbidden)
- Successful Reply: List of Key = Value pairs.
Worker=name of the executable
MD5=md5 checksum of the executable
DeleteWorker=Yes/No
DeleteClient=Yes/No
DeleteResults=Yes/No
Ping=#
- PLATFORM: one of "Linux, Unix, BSD, WinNT, Win95" (TODO: need a better way than $^O)
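A possible exchange with made-up values (the worker name and checksum are only illustrative):

GET http://master/config?sessionid=4711&platform=Linux

Worker=simulate
MD5=d41d8cd98f00b204e9800998ecf8427e
DeleteWorker=Yes
DeleteClient=No
DeleteResults=Yes
Ping=30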
Ping (HTTP)
- The ping interval is given in seconds; 0 means no ping.
- The purpose of the ping is that the client notices when the server has been stopped, has finished or has died.
- Request: GET http://master/ping?sessionid=SESSIONID&ticket=TICKET
- Fail due to wrong session id: 403 (Forbidden)
- Successful, but ticket expired (task already done): 205 (Reset Content)
- Successful (keep on it!): 204 (No Content)
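A sketch of how a client could evaluate the reply (status handling follows the codes above; the function name is only illustrative):

use strict;
use warnings;
use LWP::UserAgent;

sub ping_server {
    my ($ua, $master, $sessionid, $ticket) = @_;
    my $r = $ua->get("$master/ping?sessionid=$sessionid&ticket=$ticket");
    return 'keep'  if $r->code == 204;   # keep working on the task
    return 'abort' if $r->code == 205;   # ticket expired, task already done elsewhere
    return 'die';                        # 403, timeout or server gone: terminate
}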
Worker
- Request: GET http://master/worker?sessionid=SESSIONID
- Fail (due to wrong session id): 403 (Forbidden)
- Fail (due to file not found): 403 (Forbidden)
- Success: binary file
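The MD5 check from the Configuration section could be done on the client roughly like this (Digest::MD5 ships with Perl; the function name is only illustrative):

use strict;
use warnings;
use Digest::MD5;

sub worker_is_current {
    my ($file, $expected_md5) = @_;
    return 0 unless -e $file;
    open my $fh, '<', $file or return 0;
    binmode $fh;
    my $md5 = Digest::MD5->new->addfile($fh)->hexdigest;
    close $fh;
    return lc($md5) eq lc($expected_md5);
}
# if this returns false: GET http://master/worker?sessionid=SESSIONID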
Task
- Request: GET http://master/task?sessionid=SESSIONID
- Fail (due to wrong session id): 403 (Forbidden)
- Fail (because no task left): 503 (Service Unavailable)
- Success:
[Task]
Ticket=# (unique number within session)
CommandLine=commandline
[Input filename]* (filename can be "STDIN")
Content=<<ENDOFCONTENT
real file content here (ASCII)
ENDOFCONTENT
[Result filename]+ (filename can be "STDOUT")
Name=resultname
- the * behind a section means there can be _zero_ or more such sections
- the + behind a section means there can be _one_ or more such sections
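A filled-in task reply with made-up values (worker name, parameters and file contents are only illustrative):

[Task]
Ticket=17
CommandLine=./simulate -n 100 -seed 42
[Input STDIN]
Content=<<ENDOFCONTENT
100 42
ENDOFCONTENT
[Result STDOUT]
Name=simoutput
[Result out.txt]
Name=extraoutput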
Task completed
- Successful: POST http://master/complete?sessionid=SESSIONID&ticket=TICKET
[Result]+
Name=resultname
Content=<<ENDOFCONTENT
file content here (ASCII)
ENDOFCONTENT
- Failed: GET http://master/failed?sessionid=SESSIONID&ticket=TICKET
- Reply Fail due to wrong session id: 403 (Forbidden)
- Reply otherwise: 200 (OK)
- binary content is not supported
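For example, a successful completion of the task sketched above could be posted like this (values made up):

POST http://master/complete?sessionid=4711&ticket=17

[Result]
Name=simoutput
Content=<<ENDOFCONTENT
energy = -1.234
ENDOFCONTENT
[Result]
Name=extraoutput
Content=<<ENDOFCONTENT
iterations = 815
ENDOFCONTENT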
Implementation Details
Configuration and Files
- Server config: TODO describe
- Parameter file: a CSV file with cells separated by |, parameter names in the headline, and each following line containing one parameter set. All lines must have the same number of cells as the headline!
- Specification of the worker: Command line: a Perl-syntax string with variables for the parameters; Input files: the name of the file and the parameter name to write into it.
- Result specification: a result consists of a list of name-value pairs, where the name identifies the particular output and the value says where the output comes from. For example myoutput="stdout", myfileoutput="out.txt".
- Validation: standard implementations are provided, and a custom implementation can be supplied by the user as a Perl function. A validation function gets the result of the worker and returns success or failure.
- Collection: standard implementations are provided, and a custom implementation can be supplied by the user as a Perl function. A collection function gets the task description (number, parameter set) and the result of the worker and can do whatever it wants with it (usually it writes to a file). See the examples below this list.
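To make this concrete, here is a made-up parameter file with a matching command line, followed by a sketch of custom validation and collection functions (the names, result keys and exact signatures are assumptions, not a fixed API):

n|seed|method
100|1|fast
100|2|fast
200|1|exact

Command line: ./simulate -n $n -seed $seed -m $method

# Hypothetical validation: accept the result only if the stdout output
# (result key "myoutput", as in the example above) contains an energy line.
sub my_validate {
    my ($result) = @_;                  # e.g. { myoutput => "...", myfileoutput => "..." }
    return $result->{myoutput} =~ /^energy = /m ? 1 : 0;
}

# Hypothetical collection: append one summary line per task.
sub my_collect {
    my ($task, $result) = @_;           # task number/parameters and the worker results
    open my $fh, '>>', 'summary.txt' or die "summary.txt: $!";
    print $fh join('|', $task->{number}, $result->{myoutput}), "\n";
    close $fh;
}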
NFS awareness
The problem is that if some slaves share files via NFS or another network filesystem, different clients could overwrite each other's data. Basically there are three points where this can happen:
- the client is copied
- the client fetches the worker
- the worker writes its data to a file.
Solutions:
- a) start and copy the clients serially (very slow), or b) copy only one client at a time but start them in parallel (fast on NFS, slow otherwise)
- before fetching the worker, the client creates a .lock file. The other clients check for its existence and wait for the worker (see the sketch below).
- every worker is started in a separate directory, named after the session id and the ticket number
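The .lock idea can be implemented with an exclusive create, for example (sketch only):

use strict;
use warnings;
use Fcntl qw(O_CREAT O_EXCL O_WRONLY);

sub fetch_worker_once {
    my ($workerfile) = @_;
    if (sysopen(my $lock, "$workerfile.lock", O_WRONLY | O_CREAT | O_EXCL)) {
        close $lock;
        # ... fetch the worker from the server here ...
        unlink "$workerfile.lock";
    } else {
        sleep 1 while -e "$workerfile.lock";   # another client is fetching it
    }
}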
Error detection
Remote shell command (ssh) termination code:
- 0 => Success: the executed command completed successfully.
- otherwise => Failure: possible reasons are a failed connection, a program that was not found, or a program that terminated without success.
To check a connection and the authentication:
- return 0 (success): the connection is OK and the machine has a shell. (TODO: check for Windows and Mac machines)
- otherwise: error
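In Perl these cases can be told apart from the value that system() returns; a sketch (the host name is a placeholder):

use strict;
use warnings;

my $slave = "node01";                      # placeholder host name
my $rc = system("ssh", $slave, "true");    # dummy command for the connection test
if ($rc == 0) {
    # connection and authentication OK, remote shell available
} elsif ($rc == -1) {
    # ssh could not be started on the master at all
} else {
    my $exit = $rc >> 8;                   # exit code of ssh / the remote command
}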