The program uses the autocorrelation function, binned to a given resolution and computed up to a given lag, as a cost function. See the paper for an introduction. An unevenly sampled time series must provide two pieces of information: the actual value and the time at which it was sampled. Thus a two-column file is expected. By default, the values are read from the first column and the times from the second; this can be changed by specifying -c with one or two arguments. The output file always contains the data first and then the times.
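The cost function described above, an autocorrelation binned to the resolution given by -d and computed up to the lag given by -D, can be sketched in Python. The binning convention, normalization, and the way the -W averages enter are illustrative assumptions, not the program's exact implementation:

```python
import numpy as np

def binned_autocorrelation(values, times, bin_width, max_lag):
    """Autocorrelation of an unevenly sampled series, averaged over
    lag bins of width bin_width, up to max_lag.  Sketch only; the
    exact binning/normalization conventions are assumptions."""
    v = values - np.mean(values)
    n_bins = int(max_lag / bin_width)
    sums = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    n = len(v)
    for i in range(n):
        for j in range(i + 1, n):
            lag = abs(times[j] - times[i])
            if lag >= max_lag:
                continue
            b = int(lag / bin_width)
            sums[b] += v[i] * v[j]
            counts[b] += 1
    # average the products within each lag bin; empty bins stay zero
    c = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    return c / np.var(values)

def cost(c_data, c_surrogate, weight=0):
    """Mismatch between original and surrogate autocorrelations,
    weighted as -W suggests: 0 = max |difference|, 1 = |diff|/lag,
    2 = (diff/lag)**2 (lag index counted from 1)."""
    d = c_surrogate - c_data
    lags = np.arange(1, len(d) + 1)
    if weight == 0:
        return np.max(np.abs(d))
    if weight == 1:
        return np.sum(np.abs(d) / lags)
    return np.sum((d / lags) ** 2)
```

The annealing then tries to drive this cost to zero by shuffling the values while keeping the sampling times fixed.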
randomize_uneven_exp_random
-d# -D# [-W#]
[-n# -u# -I# -o outfile
-l# -x# -c#[,#] -V# -h -T# -a# -S# -s# -z# -C#]
file
-d time span of one bin
-D total time spanned
-W type of average: 0=max(c), 1=|c|/lag, 2=(c/lag)**2 (default 0)
-n number of surrogates (default 1)
-u improvement factor before write (default 0.9 = if 10% better)
-I seed for random numbers (0)
-l maximal number of points to be processed (default all)
-x number of values to be skipped (0)
-c columns to be read (measurements from 1, times from 2)
-o output file name, just -o means file_rnd(_nnn)
-V verbosity level (0 = only fatal errors)
-h show usage message
-T initial temperature (default: automatic melting)
-a cooling factor (default automatic)
-S total steps before cooling (default 20000)
-s successful steps before cooling (default 2000)
-z minimal successful steps (default 200)
-C goal value of cost function (default zero)

Verbosity levels for -V (add the values you want):
1 = input/output
2 = current value of cost function upon printable improvement
4 = cost mismatch
8 = temperature etc. at cooling
16 = verbose cost if improved
32 = verbose cost mismatch
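Since the verbosity levels are bit flags that add up, the value to pass with -V can be computed as a sum (or bitwise OR); for example, to get input/output reporting, cost values on improvement, and cooling information:

```python
# Bit flags for -V, as listed above
IO       = 1   # input/output
COST     = 2   # current cost on printable improvement
MISMATCH = 4   # cost mismatch
COOLING  = 8   # temperature etc. at cooling

verbosity = IO | COST | COOLING
print(verbosity)  # pass this as -V11
```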
Note: if neither -a nor -C is given, the annealing will keep starting over with slower cooling rates. This may be necessary if good guesses are not available, but of course multiple surrogates will then have to be made by further separate calls.
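The restart behaviour just described can be sketched as follows. The acceptance rule, schedule constants, and the way the cooling factor is slowed on each restart are illustrative assumptions, not the program's actual internals:

```python
import math
import random

def anneal_with_restarts(cost, propose, state, goal=0.0,
                         alpha0=0.5, t0=1.0, steps=20000, seed=0):
    """Anneal until cost(state) <= goal; if a pass fails, restart
    with a cooling factor alpha closer to 1 (slower cooling).
    Sketch of the behaviour when neither -a nor -C is given."""
    rng = random.Random(seed)
    alpha = alpha0
    while True:
        t, s, c = t0, state, cost(state)
        for _ in range(steps):
            cand = propose(s, rng)
            cc = cost(cand)
            # Metropolis rule: accept improvements always, and
            # worse states with probability exp(-(cc - c) / t)
            if cc <= c or rng.random() < math.exp(-(cc - c) / t):
                s, c = cand, cc
            if c <= goal:
                return s, c
            t = max(t * alpha, 1e-12)   # cool (floor avoids t == 0)
        alpha = 1 - (1 - alpha) / 2     # slower cooling, start over
```

For a single surrogate with known good parameters, supplying -a and -C directly avoids the restart loop entirely.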