Queuing ESE Tasks in a Loop
When using desktop ENVI and IDL, it is useful to set up processing that must run many times as a batch script. The easiest way to do this is with an IDL for loop that processes one file at a time. On an instance running ENVI Services Engine (ESE), however, it is better to call the task once per file. This makes the processing more robust, because an error on a single file will not halt processing for the rest. That leaves the question: how would one loop over every file that needs processing?
One solution is to use IDL to loop over each file and launch every task that needs to be performed, building up a list of tasks queued for execution. To do this, create lists of input and output files much as you would for a batch process, then submit the processing request to ESE one file at a time by calling the appropriate HTTP address.
This can be done using IDL's built-in HTTP client, IDLnetURL. As an example, if the ESE process to be called is an asynchronous task named "apply_color_table", the HTTP call that starts the task takes the form:

http://(host):(port)/ese/services/AsyncService/apply_color_table?inputFile=(input file)&outputFile=(output file)

where (host) is the name or IP address of the server, (port) is the port ESE listens on, and the inputFile and outputFile query parameters carry the actual input and output file names. One way to set up this call so that it occurs on multiple files is as follows, where inFiles is a variable containing all of the files to be processed.
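Note that file names placed directly into a query string should be percent-encoded, since a path containing spaces or other special characters would otherwise produce a malformed URL. A minimal sketch of safe URL construction, written in Python with only the standard library (the host, port, and endpoint path here are the placeholders from above, not verified values):

```python
from urllib.parse import urlencode

def build_task_url(host, port, task, in_file, out_file):
    # urlencode percent-encodes the file paths so spaces and
    # slashes cannot break the query string
    query = urlencode({"inputFile": in_file, "outputFile": out_file})
    return f"http://{host}:{port}/ese/services/AsyncService/{task}?{query}"

url = build_task_url("myserver", 8181, "apply_color_table",
                     "/data/scene 1.dat", "/data/ct_scene 1.dat")
print(url)
```

The same concern applies in IDL: either encode the paths before concatenating them into URL_QUERY, or keep file names free of spaces and reserved characters.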
oUrl = Obj_New('IDLnetURL')
oUrl.SetProperty, URL_HOST = !SERVER.HOSTNAME
oUrl.SetProperty, URL_PORT = '8181'   ; ESE's default port; adjust as needed
oUrl.SetProperty, URL_PATH = 'ese/services/AsyncService/apply_color_table'
foreach inFile, inFiles do begin
  oUrl.SetProperty, URL_QUERY = 'inputFile=' + inFile + $
    '&outputFile=' + 'ct_' + inFile
  ; /STRING_ARRAY returns the response body instead of saving it to disk
  result = oUrl.Get(/STRING_ARRAY)
  json = JSON_Parse(StrJoin(result))
  print, 'status file: ' + json['jobStatusURL']
endforeach
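For readers working outside IDL, the same queuing loop can be sketched in another language. The Python sketch below mirrors the logic above: one request URL per file, with a "ct_" prefix on the output name. The endpoint path is illustrative rather than a verified ESE address, and the submission step is injected as a callable so the sketch runs without a live server:

```python
import json

def queue_tasks(host, in_files, task="apply_color_table", submit=None):
    # Submit one ESE request per file, mirroring the IDL loop.
    # `submit` is a callable taking a URL and returning the raw JSON
    # response body; pass None to skip submission entirely.
    status_urls = []
    for in_file in in_files:
        query = "inputFile=" + in_file + "&outputFile=ct_" + in_file
        url = f"http://{host}/ese/services/AsyncService/{task}?{query}"
        if submit is not None:
            response = json.loads(submit(url))
            status_urls.append(response["jobStatusURL"])
    return status_urls

# Stubbed submission so the sketch runs without a server
fake = lambda url: '{"jobStatusURL": "http://myserver/status/1"}'
statuses = queue_tasks("myserver:8181",
                       ["scene1.dat", "scene2.dat"], submit=fake)
```

Injecting the submission function also makes the loop easy to test, since the HTTP call can be swapped for a stub.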
The task that is called contains the actual processing; in this case, the equivalent call made directly in IDL would be:
apply_color_table, inputFile=inputFile, outputFile=outputFile
This procedure accepts the input and output file names as keywords, which are passed in by the queuing script.
Once the queuing script completes, ESE will work through the queued tasks, distributing the workload across CPUs and across any workers that have been set up.
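Because each submission returns a jobStatusURL, the queuing script (or any client) can poll those URLs to watch the queue drain. A minimal polling sketch in Python, assuming the status endpoint returns JSON with a "jobStatus" field; that field name and its values are assumptions for illustration, not a verified part of the ESE API:

```python
import json
import time

def job_finished(status_json):
    # "jobStatus" and its terminal values are assumed for this sketch
    status = json.loads(status_json).get("jobStatus", "")
    return status in ("Succeeded", "Failed")

def poll(status_urls, fetch, interval=5.0):
    # Repeatedly fetch each pending status URL until every job
    # reports a terminal state; `fetch` returns the raw JSON body
    pending = list(status_urls)
    while pending:
        pending = [u for u in pending if not job_finished(fetch(u))]
        if pending:
            time.sleep(interval)

# Exercised with stubbed payloads instead of a live server:
done = job_finished('{"jobStatus": "Succeeded"}')
running = job_finished('{"jobStatus": "Started"}')
```

In practice `fetch` would wrap an HTTP GET against each status URL; passing it in keeps the polling logic independent of any particular HTTP client.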