# QSoas command reference

Here is the command reference of QSoas, which lists all the commands, what they do, and how to use them.

To get a quick introduction to QSoas, you may look at the tutorial, or at the list of Frequently Asked Questions.

# QSoas datasets

The basic unit to manipulate data in QSoas is the dataset (sometimes also called “buffer”, from the terminology used in SOAS). A dataset is a large table of numbers. The first column contains the X values, and the following columns contain Y values. A dataset can have many columns. QSoas plots a dataset by showing the first Y column as a function of the X column.

You can use the edit command to see and edit the contents of the table.

In addition to the raw numbers, a QSoas dataset contains the following information:

• A name, which is originally the name of the loaded file. It is modified with each command applied to the dataset.
• A series of meta-data, which are just named pieces of information. They can be numbers, text, dates, or even lists.
• Perpendicular coordinates, one for each Y column. They are used when the dataset can also be seen as a series of $y = f(x,z)$, with a different $z$ for each Y column.
• A series of flags, which can be used to retrieve datasets from the stack. Unlike the other attributes, flags are not kept when the dataset is modified.
• Possible names for rows and columns. These can be manipulated using set-column-names and set-row-names; see below.

## Different ways to interpret datasets

Datasets are really just collections of columns of numbers, but they can be interpreted in different ways.

• The most common interpretation is just a series of $y = f(x)$ columns, with several values of $y$: y, y2, and so on.
• If perpendicular coordinates are specified, then it is possible to view these columns as a table of $y = f(x,z)$ values, with $x$ corresponding to the first column, and a value of $z$ for each “y” column. It is possible to treat data this way using for instance the /mode=xyz option of apply-formula, or to draw contour lines using the contour command.
• It is possible to just treat the columns as a series of matching numbers, one per row. In that case, using column names can greatly help, and the command tweak-columns makes it much easier to manipulate datasets containing several (or even many) columns.
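To make the $y = f(x,z)$ interpretation concrete, here is a toy sketch in plain Ruby (not QSoas code; the data values and the f helper are invented for illustration):

```ruby
# One x column, two y columns, and one perpendicular coordinate z
# per y column: the table can be read as y = f(x, z).
x  = [0.0, 1.0, 2.0]
z  = [10.0, 20.0]                  # perpendicular coordinates
ys = [[0.0, 1.0, 4.0],             # y column for z = 10
      [0.0, 2.0, 8.0]]             # y column for z = 20

# Look up the value y = f(xv, zv) from the table.
def f(x, z, ys, xv, zv)
  ys[z.index(zv)][x.index(xv)]
end

f(x, z, ys, 2.0, 20.0)  # => 8.0
```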

## Row and column names

QSoas can store names for columns and rows. As mentioned above, they can be set using set-column-names and set-row-names.

Column names are used in particular to designate columns:

• in formulas (using the $c.column syntax);
• in column specifications, using either $c.column or named:column.

Column names are visible in edit and show.

Row names are only visible in edit. As of now, their use is relatively limited, but they become column names upon using transpose, and you can save them by specifying /row-names=true to the save command.

QSoas can read column and row names from a file, generally seamlessly for column names; for row names, however, you most often have to tell QSoas which columns contain the row names, using the /text-columns option of load.

Learning to handle column names is particularly useful when working with exported fit parameters.

## Not a number

The table cannot contain text. When QSoas reads a file and cannot make a number out of what it reads, it uses a special numeric value called nan (Not A Number). nan values can be useful, but they “pollute” numbers: any operation that involves a nan also has nan as its result. This means in particular that it is not possible to fit a dataset that contains a nan, or to determine an average value, and so on…
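This propagation rule can be illustrated in plain Ruby, whose floating-point arithmetic follows the same IEEE rules (the variable names are just for the example):

```ruby
nan = Float::NAN

puts(nan + 1)      # any arithmetic involving NaN yields NaN
puts(2.0 * nan)    # same here
puts(nan.nan?)     # prints "true"

# An average over values containing a NaN is itself NaN:
values = [1.0, 2.0, nan, 4.0]
avg = values.sum / values.size
puts(avg.nan?)     # prints "true"
```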

nan values are displayed as a big cross joining the two points between which they are located.

To get rid of points that have either X or Y values that are nan, use the following:

QSoas> strip-if x.nan?||y.nan?

# Commands, arguments and options (how to read this document)

QSoas works by entering commands inside the command prompt, or alternatively using the menus.

Most commands have arguments and options. Arguments and options are separated by spaces:

QSoas> command argument1 argument2 "argument 3" /option=option /option2="with spaces"

If you need to pass arguments or option values that have spaces, make sure you quote them using " or ', like in the above example. The = sign for the options can be replaced by a space, so that the command above could also have been run thus:

QSoas> command argument1 argument2 "argument 3" /option option /option2 "with spaces"

Arguments are italicized in the documentation below. You need to provide all the arguments for a command to work; if you don’t, QSoas will prompt for them. Some arguments are followed by …, which means that you can pass several space-separated values. This is the case for load, for instance:

QSoas> load file1 file2 file3

The order of the arguments must absolutely be respected. On the other hand, the options can come at any place in the command line. For instance, the two following commands are equivalent:

QSoas> load file.dat /columns=2,3
QSoas> load /columns=2,3 file.dat

### Default option

Some options are marked as “(default option)”, which means that, if all arguments of the command are already specified, you can omit the /option= part of the option. For instance, to set the temperature to 300 K, you could run:

QSoas> temperature /set=300

But, as /set is the default option, you can omit the /set= and write:

QSoas> temperature 300

In this documentation, all options and arguments have mouseover texts that give a short explanation of what kind of values are expected.

Some commands can be used through a short name (like q for quit), indicated as such in the present documentation.

Some commands are marked as (interactive). This means that their use requires user input. If they are used in a script, the script pauses for user interaction.

All the commands that can be run from the command line are also available from within the menus. Running the command through the menu gives a dialog box in which one must choose the arguments of the command, and one can also select the options.

This can be a good way to discover what commands are available, and what they do.

Many commands of QSoas make use of “plain text files”, i.e. files that simply contain unformatted text.

On Windows, use Notepad to edit them. On Linux, pico, nano, vi or emacs are good choices. On MacOS, use TextEdit, but make sure you hit Cmd+Shift+T to switch to “plain text” format; the default is rich text (i.e. text with formatting information) in the RTF format, and QSoas does not understand RTF.

### “inline” text files

Starting from QSoas version 3.1, it is possible to “define” the contents of text files directly inside a script file, using a special block delimited by ## INLINE: file name and ## INLINE END lines. The text between these two lines becomes accessible as a special file called inline:file name. Try for instance running the following script:

## INLINE: data.dat
1 2
2 5
3 9
## INLINE END
load inline:data.dat

This is very useful in particular in combination with run-for-each or run-for-datasets to define “subroutines” that are maintained in the same file as the main one.

## Dataset lists (or buffer lists) arguments

Many commands, such as flag, contract and others take lists of datasets as arguments. This list can take several forms:

• A comma-separated list of dataset numbers (the ones given by show-stack), such as: 1,4,7 (0 is the current dataset, 1, the one just before, which you can reach using undo, etc.).
• Negative numbers refer to the “redo” stack: -1 is the dataset you would get by running redo
• A number range, such as 1..7, meaning all datasets from 1 to 7 included.
• A number range with a step, such as 1..7:3, meaning 1,4,7.
• all for all datasets on the stack.
• displayed for the currently displayed datasets.
• latest for the datasets produced by the last command (running a script counts as many commands); this can be different from 0 if the last command produced more than one dataset, or none.
• latest:1 is the same as latest, latest:2 represents the datasets produced by the command before the last one, etc…

It is also possible to make use of dataset flags set by flag:

• flagged stands for all flagged datasets (regardless of the name of the flag);
• unflagged for all datasets that don’t have any flag;
• flagged- and unflagged- do the same, but with the datasets in the reverse order;
• flagged:flagname for all datasets that have the flag flagname;
• unflagged:flagname for all datasets that don’t have the flag flagname;
• and the variants flagged-:flagname and unflagged-:flagname for the reversed order.

Finally, it is also possible to specify datasets by their name, using the named: prefix. For instance, named:generated.dat refers to all the datasets whose name is generated.dat.
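As an illustration only (this is not QSoas source code), here is a hypothetical Ruby sketch of how the numeric forms of such a list could expand into dataset numbers:

```ruby
# Hypothetical helper: expand a comma-separated dataset-list spec
# such as "1,4,7", "1..7" or "1..7:3" into a list of dataset numbers.
def expand_spec(spec)
  spec.split(",").flat_map do |part|
    if part =~ /\A(-?\d+)\.\.(-?\d+)(?::(\d+))?\z/
      first, last, step = $1.to_i, $2.to_i, ($3 || 1).to_i
      (first..last).step(step).to_a   # ranges, with an optional step
    else
      [part.to_i]                     # plain numbers (negative = redo stack)
    end
  end
end

expand_spec("1,4,7")   # => [1, 4, 7]
expand_spec("1..7:3")  # => [1, 4, 7]
```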

Note that in this documentation, the terms “buffer” and “dataset” are synonyms.

## Dataset columns

Some commands, such as bin or dataset-options, take dataset column names (or numbers) as arguments or options. There are several ways to designate a column:

• using a number: 1 is the $x$ column, 2 is the $y$ column, and so on
• using a number prefixed by #: this is a 0-based index, #0 is then the $x$ column
• by its name: x, y, z, y2, y3 and so on. y2 is equivalent to z
• no or none when you don’t want to specify a number at all, such as for disabling the display of error bars with dataset-options.

Some commands (like contract) take column lists, which are comma-separated lists of columns (just like above), with the addition of ranges: 2..6 are columns 2 to 6 inclusive.

## Regular expressions

Some commands, notably load and the related commands, make use of “regular expressions”. Regular expressions are a way to describe what a text looks like, such as “numbers”, “white space”, “anything that looks like a date”, etc. Here is how they work:

• A simple text just matches itself. For instance, using /separator=, for load-as-text means that the columns are separated by commas.
• {blank-line} matches a fully blank line.
• {blank} matches a series of blanks. This is the default separator for load-as-text.
• {text-line} matches a line that does not start with numbers (ignoring spaces).
• /regex/, which is taken as a Qt regular expression. For instance, /[;,]/ means “either ; or ,”. Please see the Qt documentation for more information.
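For instance, the effect of a /[;,]/ separator can be illustrated in plain Ruby, whose regular expression syntax is close to Qt's for such simple patterns:

```ruby
# Splitting a line on either ';' or ',', as the /[;,]/ separator would.
line = "1.0;2.5,3.7"
fields = line.split(/[;,]/)
# => ["1.0", "2.5", "3.7"]
```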

## Commands producing several datasets

Many commands in QSoas will produce several datasets, for instance load, that loads several files at the same time, or split-monotonic, that splits a dataset into its monotonic parts. All these commands share a set of options:

• /style, that can be used to display all the curves with gradual changes in color; use for instance /style=red-to-blue or /style=brown-green (there is automatic completion on this);
• /flags, that can be used to set flags to the newly generated datasets, see the flag command for more information.
• /set-meta, that can be used to set meta-data on the newly generated datasets, using a key=value syntax (so you have two = signs in a row). This option can be used several times to add several meta-data;
• /reversed, which can be used to reverse the order in which the datasets are pushed to the stack. Useful for instance to get the result of sim- commands in the same order as the original datasets.

For instance, try out:

QSoas> generate-dataset -1 1 /style=brown-green sin((10+number)*x) /number=11
QSoas> generate-dataset -1 1 /set-meta=a=2 /set-meta=b=3 

# General purpose commands

### quit – Quit

quit

Other name: q

Exits QSoas, losing the current session. The full log of the session is always available in the soas.log file created in the initial directory. This is indicated at startup in the terminal.

To avoid accumulating very large log files, the log file gets renamed as soas.log.1 when you start QSoas (and the older one as soas.log.2, and so on until soas.log.5).
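The rotation scheme described above can be sketched in Ruby (a hypothetical re-implementation for illustration, not QSoas's actual code; the rotate_logs helper is invented):

```ruby
require "fileutils"

# At startup: soas.log.4 becomes soas.log.5, ..., soas.log becomes
# soas.log.1, and anything older than soas.log.5 is dropped.
def rotate_logs(dir, base = "soas.log", keep = 5)
  oldest = File.join(dir, "#{base}.#{keep}")
  File.delete(oldest) if File.exist?(oldest)
  (keep - 1).downto(1) do |i|
    src = File.join(dir, "#{base}.#{i}")
    FileUtils.mv(src, File.join(dir, "#{base}.#{i + 1}")) if File.exist?(src)
  end
  src = File.join(dir, base)
  FileUtils.mv(src, File.join(dir, "#{base}.1")) if File.exist?(src)
end
```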

If you want to save the entire state of QSoas before quitting so you can restart exactly from where you left, use save-stack.

### credits – Credits

credits /full=yes-no

• /full=yes-no: Full text of the licenses – values: a boolean: yes, on, true or no, off, false

This command displays credits, copyright and license information of QSoas and all the dependencies linked to or built in your version. You’ll get the full license text with /full=true.

It also lists publications whose findings/equations/algorithms were directly used in QSoas.

### version – Version

version /dump-sysinfo=yes-no /show-features=yes-no

• /dump-sysinfo=yes-no: If true, writes system specific information to standard output – values: a boolean: yes, on, true or no, off, false
• /show-features=yes-no: If true, shows detailed information about the capabilities of QSoas (defaults to false) – values: a boolean: yes, on, true or no, off, false

Prints the version number of QSoas, including various build information.

If the /show-features=true option is given, the output is much longer and contains a list of all the features built into QSoas, including the fit engines, the available statistics, the time-dependent parameters and so on.

### save-history – Save history

save-history file /overwrite=yes-no

• file: Output file – values: name of a file
• /overwrite=yes-no: If true, overwrite without prompting – values: a boolean: yes, on, true or no, off, false

Saves all the commands that were launched since the beginning of the session, to the given (text) file.

This can be used for saving, as a script, a series of commands that should be applied repeatedly.

### files-browser – Browse files

files-browser (interactive)

This command starts a file browser, which makes it easy to figure out which files are present, what meta-data are associated with them, and what kind of backend will be used to load them.

The browser makes it very easy to edit the values of the meta-data, as they are displayed each in their own column and are editable. Copy/paste from an external spreadsheet is supported.

### cd – Change directory

cd directory /from-home=yes-no /from-script=yes-no

Other name: G

• directory: New directory – values: name of a directory
• /from-home=yes-no: If on, relative from the home directory – values: a boolean: yes, on, true or no, off, false
• /from-script=yes-no: If on, cd relative from the current script directory – values: a boolean: yes, on, true or no, off, false

Changes the current working directory. If /from-home is specified, the directory is assumed to be relative to the user’s home directory. If /from-script is specified, the directory is assumed to be relative to that of the command file currently being executed by a run command (or in a startup script).

### pwd – Working directory

pwd

Prints the full path of the current directory.

It is also indicated in the title of the QSoas window.

### head – Head

head file /number=integer /skip=integer

• file: name of the file to show – values: name of a file
• /number=integer: number of lines to show – values: an integer
• /skip=integer: number of lines to skip – values: an integer

This command prints the first few lines of the given file to the terminal. This is useful to quickly see the contents of a file, and to see how QSoas is able to read it.

The number of lines being printed is chosen using the /number= option (negative means print everything).

A number of lines can be skipped at the beginning using the /skip= option.

### ls – List files

ls (/directory=)directory

• (/directory=)directory (default option): Directory to list – values: name of a directory

ls lists the files in the current directory, just like the standard Unix command.

### temperature – Temperature

temperature (/set=)number

Other name: T

• (/set=)number (default option): Sets the temperature – values: a floating-point number

Shows or sets the current temperature, in Kelvins. The temperature is used in many places, mostly in fits to serve as the initial value for the temperature parameter. To set the temperature, pass its new value using the /set option (the /set= part is optional):

QSoas> temperature 310

### commands – Commands

commands

Lists all available commands, with a short help text. This also includes user-defined commands, such as custom fits loaded from a fit file, and aliases.

### help – Help on…

help (/command=)command /dump=yes-no /location=text /synopsis=yes-no

Other name: ?

• (/command=)command (default option): The command on which to give help – values: the name of one of QSoas’s commands
• /dump=yes-no: Shows information about the contents of the help files – values: a boolean: yes, on, true or no, off, false
• /location=text: Shows the given URL location in the documentation – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /synopsis=yes-no: Does not show the help, but print a brief synopsis – values: a boolean: yes, on, true or no, off, false

Gives all help available on the given command. It shows the inline (HTML) documentation.

If you have doubts whether the documentation is up-to-date, you can use the /synopsis=true option to have a brief text description of the command together with its arguments and options. By construction, this small text is always up-to-date.

If you don’t know what the /location option does, you don’t need it.

### tips – Tips

tips /show-at-startup=yes-no

• /show-at-startup=yes-no: Whether to show the tips window at startup – values: a boolean: yes, on, true or no, off, false

Without any options, it shows the “startup tips” window. With the /show-at-startup option, you can control whether the tips will show at startup in the next run of QSoas or not.

### save-output – Save output

save-output file /overwrite=yes-no

• file: Output file – values: name of a file
• /overwrite=yes-no: If true, overwrite without prompting – values: a boolean: yes, on, true or no, off, false

Save all text in the terminal to a plain text file. Equivalent to copy-pasting the contents of the terminal to a plain text file using a text editor.

### print – Print

print (/file=)file /nominal-height=integer /overwrite=yes-no /page-size=text /title=text

Other name: p

• (/file=)file (default option): Save as file – values: name of a file
• /nominal-height=integer: Correspondence of the height of the page in terms of points – values: an integer
• /overwrite=yes-no: If true, overwrite without prompting – values: a boolean: yes, on, true or no, off, false
• /page-size=text: Sets the page size, like 9×6 for 9cm by 6cm – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /title=text: Sets the title of the page as printed – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “

Prints the current view, providing a usual print dialog. If you just want a PDF or PostScript file, just provide the file name as the /file option.

An optional title can be added using the /title option.

You can also use a .svg extension if you want to produce an SVG file that can later be modified by, e.g., Inkscape.

Important note: QSoas is not a data plotting system, it is a data analysis program. Don’t expect miraculous plots!

### define-alias – Define alias

define-alias alias command /*=text

• alias: The name to give to the new alias – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• command: The command to give an alias for – values: the name of one of QSoas’s commands
• /*=text: All options – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “

The define-alias command allows one to define a shortcut for a command one often uses with the same options. For instance, running:

QSoas> define-alias fit-2exp fit-exponential-decay /exponentials=2 /loss=true

creates a fit-2exp command that is equivalent to starting fit-exponential-decay with two exponentials by default and film loss on.

Aliases can only be used to provide default values for options. They cannot provide default values for arguments.

### display-aliases – Display aliases

display-aliases

Shows a list of all the currently defined aliases.

### graphics-settings – Graphics settings

graphics-settings /antialias=yes-no /line-width=number /opengl=yes-no

• /antialias=yes-no: Turns on/off the use of antialiased graphics – values: a boolean: yes, on, true or no, off, false
• /line-width=number: Sets the base line width for all lines/curves – values: a floating-point number
• /opengl=yes-no: Turns on/off the use of OpenGL acceleration – values: a boolean: yes, on, true or no, off, false

Gives the possibility to tweak a few settings concerning display. The settings are kept from one QSoas session to the next.

Turning on antialias (with /antialias=true) will make QSoas use antialiased drawings, which looks admittedly nicer, but requires much more computation time, to the point that drawing jagged curves may become particularly slow. Printing or exporting to PDF files through print always produces antialiased graphics, regardless of this option.

If you experience performance problems for displaying curves, use /opengl=true, as this will instruct QSoas to use hardware acceleration to display curves. It is off by default as some setups do not really benefit from that, and the OpenGL support is sometimes buggy and may result in crashes.

### ruby-run – Ruby load

ruby-run file

• file: Ruby file to load – values: name of a file

This command loads and executes a Ruby file. For the time being, the main interest of this command is to define complex functions in a separate file.

Imagine you have a file function.rb containing the text:

def mm(x,vmax,km)
  return vmax/(1 + km/x)
end

After running

QSoas> ruby-run function.rb

you can use mm like any normal function for fitting:

QSoas> fit-arb mm(x,vmax,km)

or use it in eval:

QSoas> eval mm(1.0,2.0,3.0)
=> 0.5

You can find out more about ruby code below, but here is how one can define a function my_exp that is 0 before t0 and follows an exponential relaxation starting at val with a time constant tau afterwards:

def my_exp(t,t0,tau,val)
  if t < t0
    return 0
  else
    return val*exp(-(t-t0)/tau)
  end
end
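As a quick check of the definition above, here is the same function in stand-alone Ruby; outside QSoas, exp comes from the Math module (hence the include Math), while inside QSoas it is available directly:

```ruby
include Math  # makes exp() callable directly, as it is inside QSoas

def my_exp(t, t0, tau, val)
  if t < t0
    return 0
  else
    return val * exp(-(t - t0) / tau)
  end
end

my_exp(0.5, 1.0, 2.0, 3.0)  # => 0 (before t0)
my_exp(1.0, 1.0, 2.0, 3.0)  # => 3.0 (the relaxation starts at val)
```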

### break – Break

break

Exits from the current script. Has no effect if not inside a script.

### debug – Debug

debug (/directory=)directory /level=integer

• (/directory=)directory (default option): Directory in which the debug output is saved – values: name of a directory
• /level=integer: Sets the debug level – values: an integer

With this command, it is possible to collect a large amount of debugging information. You will essentially only need this to send information to the QSoas developers to help them track down problems.

The command:

QSoas> debug directory

sets up the automatic debug output in the directory directory.

The /level option corresponds to the debug level. It defaults to 1; the higher this number, the more detailed the output.

### system – System

system command… /shell=yes-no /timeout=integer

• command…: Arguments of the command – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /shell=yes-no: use shell (on by default on Linux/Mac, off in windows) – values: a boolean: yes, on, true or no, off, false
• /timeout=integer: timeout (in milliseconds) – values: an integer

The system command can be used to run external commands from QSoas. The output of the commands will be displayed in the terminal.

For the duration of the external command, QSoas will not respond to keyboard and mouse.

If /shell is on (the default on Linux and Mac, but off on Windows), the command will be processed by the shell before being run.

If a strictly positive /timeout is specified, the command will be killed if it takes longer than the timeout to execute.

### timer – Timer

timer /name=text

• /name=text: name for the timer – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “

The first call starts a timer, and the second one stops it, showing the amount of time that has elapsed since the previous call to timer. This can be used to benchmark costly computations, for instance.

### mem – Memory

mem /cached-files=integer

• /cached-files=integer: Sets the size of the file cache – values: an integer

Displays information about the resource use of QSoas, including memory use, the number of cached files and the total CPU time used so far. The size of the file cache can be changed using the /cached-files option.

## Output file manipulation

Several commands (e.g. various data analysis commands and the fit commands) write data to the output file.

By default, the first time the output file is used, an output.dat file is created in the current directory. Another file can be used by providing its name to the output command.

### output – Change output file

output (/file=)file /meta=words /overwrite=yes-no /reopen=yes-no

• (/file=)file (default option): name of the new output file – values: name of a file
• /meta=words: when writing to output file, also prints the listed meta-data – values: several words, separated by ‘,’
• /overwrite=yes-no: if on, overwrites the file instead of appending (default: false) – values: a boolean: yes, on, true or no, off, false
• /reopen=yes-no: if on, forces reopening the file (default: false) – values: a boolean: yes, on, true or no, off, false

This command has several modes of operation. If file is provided (it is the default option, so you can omit /file=), then it opens file as the new output file. By default, if the file exists, new data are appended, and the old data are left untouched. You can force overwriting by specifying /overwrite=true.

In the other mode, when only the /meta option is provided, it sets the list of meta-data that will automatically be added to the output file when writing any data there. It is a comma-separated list of meta names; see the documentation about meta-data for more information.

It is a bad idea to modify the output file while QSoas is still using it, as that messes up what QSoas thinks is in the output file. If you forgot you were using the output file and modified it, you can avoid problems by running:

QSoas> output /reopen=true

### comment – Write line to output

comment comment

• comment: Comment line added to output file – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “

Writes the given line comment to the output file. Don’t forget to quote if you need to include spaces:

QSoas> comment 'Switching to sample 2'

The main command for loading data is load.

### load – Load

load file… /auto-split=yes-no /columns=integers /comments=pattern /decimal=text /expected=integer /flags=flags /for-which=code /histogram=yes-no /ignore-cache=yes-no /ignore-empty=yes-no /reversed=yes-no /separator=pattern /set-meta=meta-data /skip=integer /style=style /text-columns=integers /yerrors=column

Other name: l

• file…: the files to load – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /auto-split=yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean: yes, on, true or no, off, false
• /columns=integers: columns loaded from the file – values: a comma-separated list of integers
• /comments=pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters
• /decimal=text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /expected=integer: Expected number of loaded datasets – values: an integer
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Select on formula – values: a piece of Ruby code
• /histogram=yes-no: whether to show as a histogram (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /ignore-cache=yes-no: if on, ignores cache (default off) – values: a boolean: yes, on, true or no, off, false
• /ignore-empty=yes-no: if on, skips empty files (default on) – values: a boolean: yes, on, true or no, off, false
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /separator=pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /skip=integer: skip that many lines at beginning – values: an integer
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /text-columns=integers: text columns – values: a comma-separated list of integers
• /yerrors=column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’

Loads the given files and pushes them onto the data stack. QSoas features several backends for loading files (“backends” are roughly equivalent to “file formats”). In principle, QSoas is smart enough to figure out which one is correct, but you can force the use of a given backend by using the appropriate load-as- command. Using a backend directly also provides more control over the way files are loaded (this can also be done via the numerous options to load, which are forwarded to the appropriate backend). The currently available backends are described with the load-as- commands below.

Look in their documentation for more information. In particular, the options /separator=, /decimal=, /skip=, /comments=, /columns= and /auto-split are documented in the load-as-text command.

QSoas tells you which backend it used for loading a given file:

QSoas> load 03.dat
Loading file: './03.dat' using backend text

The load command caches the loaded file. If, for some reason, the cache gets in the way, use the direct load-as- commands, or alternatively use /ignore-cache=true.

load, like all the other commands that take several files as arguments, understands Unix-like wildcards:

QSoas> load *.dat

This command loads all the files ending in .dat from the current directory.

QSoas> load [0-4]*.dat

This one loads all the .dat files whose names start with a digit between 0 and 4.

One can also set various dataset options while loading with load (and the load-as- commands), using the options /yerrors= and /histogram=. See the dataset-options command for more information.

The /style= option sets the color style when loading several curves:

QSoas> load *.dat /style=red-blue

This loads all the .dat files in the current directory, and displays them with a color gradient from red (for the first loaded file) to blue (for the last loaded file).

With the /flags= option, one can flag datasets as they get loaded. Using it has the same effect as running flag with the same option on the loaded datasets.

The load command also provides dataset selection rules through the /for-which option; more about that in the dedicated section.

By default, load and the related commands will not create a dataset if it would be empty (i.e. a valid data file containing no data); you can force the creation of empty datasets using /ignore-empty=false.

Finally, it is possible to specify the number of datasets that should be loaded with the /expected= option. The command fails if the number of loaded datasets does not match this number. This can be useful in scripts, to abort when a file is missing; see run for how to make use of this.

### load-as-text – Load files with backend ‘text’

load-as-text file… /auto-split=yes-no /columns=integers /comments=pattern /decimal=text /expected=integer /flags=flags /for-which=code /histogram=yes-no /ignore-empty=yes-no /reversed=yes-no /separator=pattern /set-meta=meta-data /skip=integer /style=style /text-columns=integers /yerrors=column

• file…: the files to load – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /auto-split=yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean: yes, on, true or no, off, false
• /columns=integers: columns loaded from the file – values: a comma-separated list of integers
• /comments=pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters
• /decimal=text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /expected=integer: Expected number of loaded datasets – values: an integer
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Select on formula – values: a piece of Ruby code
• /histogram=yes-no: whether to show as a histogram (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /ignore-empty=yes-no: if on, skips empty files (default on) – values: a boolean: yes, on, true or no, off, false
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /separator=pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /skip=integer: skip that many lines at beginning – values: an integer
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /text-columns=integers: text columns – values: a comma-separated list of integers
• /yerrors=column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’

Loads files using the backend text, bypassing cache and automatic backend detection. The text backend recognizes space-separated data (which includes tab-separated data). Most “plain text” files will be read correctly by this backend. By default, it loads all the columns of the file, but only displays the second as a function of the first. If you want to work on other columns, have a look at expand. Alternatively, you can specify the columns to load using the /columns option, see below.

Apart from the options of dataset-options and the /style and /flags options documented in the load command, the text backend accepts several options controlling the way the text files are interpreted:

• /separator specifies the text that separates the columns (blank spaces by default). You can use regular expressions.
• /decimal specifies the decimal separator (default is the dot). This applies only when loading.
• /comments specifies a regular expression describing comment lines (i.e. lines that get ignored). By default, lines that do not start with a number are ignored.
• Give to /skip a number of text lines that should be ignored at the beginning of the text file.
• If /auto-split is true, then QSoas will create a new dataset every time it hits a series of blank lines in the file.
• /columns is a series of numbers saying in which order the file columns will be used to make a dataset. For instance, /columns=2,1 will swap X and Y at load time.
• /text-columns designates columns in the file that will be interpreted as “text”, that is, row names. 1 is the first column.
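The /separator and /comments patterns enclosed in / / delimiters behave like regular expressions. As a rough illustration, the splitting of a line into columns works much like Ruby's String#split (this is plain Ruby showing the idea, not QSoas internals):

```ruby
# How a separator pattern such as /separator=/[;,]/ splits a data line
# into columns (illustrative sketch only).
line = "1.0;2.5,3.7"
columns = line.split(/[;,]/)
p columns                 # ["1.0", "2.5", "3.7"]
p columns.map(&:to_f)     # [1.0, 2.5, 3.7]
```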

### load-as-csv – Load files with backend ‘csv’

load-as-csv file… /auto-split=yes-no /columns=integers /comments=pattern /decimal=text /expected=integer /flags=flags /for-which=code /histogram=yes-no /ignore-empty=yes-no /reversed=yes-no /separator=pattern /set-meta=meta-data /skip=integer /style=style /text-columns=integers /yerrors=column

• file…: the files to load – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /auto-split=yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean: yes, on, true or no, off, false
• /columns=integers: columns loaded from the file – values: a comma-separated list of integers
• /comments=pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters
• /decimal=text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /expected=integer: Expected number of loaded datasets – values: an integer
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Select on formula – values: a piece of Ruby code
• /histogram=yes-no: whether to show as a histogram (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /ignore-empty=yes-no: if on, skips empty files (default on) – values: a boolean: yes, on, true or no, off, false
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /separator=pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /skip=integer: skip that many lines at beginning – values: an integer
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /text-columns=integers: text columns – values: a comma-separated list of integers
• /yerrors=column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’

The csv backend is essentially the same backend as the text one, but with the separators set by default to commas and semicolons, to parse CSV files. Hence, the options have the same meaning as for load-as-text.

### load-as-chi-txt – Load files with backend ‘chi-txt’

load-as-chi-txt file… /auto-split=yes-no /columns=integers /comments=pattern /decimal=text /expected=integer /flags=flags /for-which=code /histogram=yes-no /ignore-empty=yes-no /reversed=yes-no /separator=pattern /set-meta=meta-data /skip=integer /style=style /text-columns=integers /yerrors=column

• file…: the files to load – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /auto-split=yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean: yes, on, true or no, off, false
• /columns=integers: columns loaded from the file – values: a comma-separated list of integers
• /comments=pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters
• /decimal=text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /expected=integer: Expected number of loaded datasets – values: an integer
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Select on formula – values: a piece of Ruby code
• /histogram=yes-no: whether to show as a histogram (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /ignore-empty=yes-no: if on, skips empty files (default on) – values: a boolean: yes, on, true or no, off, false
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /separator=pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /skip=integer: skip that many lines at beginning – values: an integer
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /text-columns=integers: text columns – values: a comma-separated list of integers
• /yerrors=column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’

This is a slightly modified version of load-as-text that better handles text files from CH Instruments (and is in particular able to detect at least some of their meta-data).

### load-as-eclab-ascii – Load files with backend ‘eclab-ascii’

load-as-eclab-ascii file… /auto-split=yes-no /columns=integers /comments=pattern /decimal=text /expected=integer /flags=flags /for-which=code /histogram=yes-no /ignore-empty=yes-no /reversed=yes-no /separator=pattern /set-meta=meta-data /skip=integer /style=style /text-columns=integers /yerrors=column

• file…: the files to load – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /auto-split=yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean: yes, on, true or no, off, false
• /columns=integers: columns loaded from the file – values: a comma-separated list of integers
• /comments=pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters
• /decimal=text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /expected=integer: Expected number of loaded datasets – values: an integer
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Select on formula – values: a piece of Ruby code
• /histogram=yes-no: whether to show as a histogram (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /ignore-empty=yes-no: if on, skips empty files (default on) – values: a boolean: yes, on, true or no, off, false
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /separator=pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /skip=integer: skip that many lines at beginning – values: an integer
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /text-columns=integers: text columns – values: a comma-separated list of integers
• /yerrors=column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’

This is a slightly modified version of load-as-text that better handles ASCII files exported from Biologic potentiostats.

### load-as-parameters – Load files with backend ‘parameters’

load-as-parameters file… /expected=integer /flags=flags /for-which=code /histogram=yes-no /ignore-empty=yes-no /reversed=yes-no /set-meta=meta-data /style=style /yerrors=column

• file…: the files to load – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /expected=integer: Expected number of loaded datasets – values: an integer
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Select on formula – values: a piece of Ruby code
• /histogram=yes-no: whether to show as a histogram (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /ignore-empty=yes-no: if on, skips empty files (default on) – values: a boolean: yes, on, true or no, off, false
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /yerrors=column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’

QSoas can also load the parameters from a “Save Parameters” file. The parameters end up one per column, as a function of the perpendicular coordinate used during the fit (or just an index if there were no perpendicular coordinates). This works on the parameters “saved for reusing later”; the ones “exported” can be read using the standard load-as-text command, possibly by specifying the option /comments=# to avoid ignoring lines that start with text (dataset names).

### expand – Expand

expand /expand-meta=meta-data /flags=flags /group-columns=integer /perp-meta=text /reversed=yes-no /set-meta=meta-data /style=style /x-columns=integer /x-every-nth=integer

• /expand-meta=meta-data: Expand all the given meta-data, one value per produced dataset – values: comma-separated list of meta-data that will be expanded into individual datasets, see there
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /group-columns=integer: specifies the number of Y columns in the created datasets – values: an integer
• /perp-meta=text: defines meta-data from perpendicular coordinate – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /x-columns=integer: specifies the number of X columns – values: an integer
• /x-every-nth=integer: specifies the number of columns between successive X values – values: an integer

If a dataset contains several columns, QSoas only displays the second as a function of the first. expand splits the current dataset into as many datasets as there are Y columns, i.e. an X, Y1, Y2, Y3 dataset will be split into three datasets: X, Y1; X, Y2 and X, Y3.

If /perp-meta is specified, then the given meta-data will be defined for each dataset, based on the value of the perpendicular coordinates.

By default, expand assumes that the first column is the only X column. However, if you give a number to the /x-every-nth= option, then expand assumes that there is an X column every that many columns. For instance, /x-every-nth=2 means that the layout of the dataset is X1 Y1 X2 Y2 X3 Y3…

By default, expand splits every Y column into its own dataset. However, it is possible to group them using the /group-columns option. For instance, splitting a X Y1 Y2 Y3 Y4 dataset with:

QSoas> expand /group-columns=2

will result in two datasets: X Y1 Y2 and X Y3 Y4.

The option /x-columns has a similar effect, but for the X columns. It gives the number of columns at the beginning of the dataset that will be considered as X columns. For instance, if you split an X1 X2 Y1 Y2 Y3 dataset with the command:

QSoas> expand /x-columns=2

You will get three datasets, X1 X2 Y1, X1 X2 Y2 and X1 X2 Y3.

The option /expand-meta will expand the meta-data whose names are listed. It requires that the meta-data are lists whose size is exactly the number of datasets to be created. See also here.
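The regrouping performed by /group-columns can be pictured with plain Ruby (an illustration assuming an X Y1 Y2 Y3 Y4 layout, not QSoas code):

```ruby
# expand /group-columns=2 on an X Y1 Y2 Y3 Y4 dataset: the Y columns are
# grouped two by two, and each group keeps the X column.
x_column = "X"
y_columns = ["Y1", "Y2", "Y3", "Y4"]
datasets = y_columns.each_slice(2).map { |group| [x_column] + group }
p datasets   # [["X", "Y1", "Y2"], ["X", "Y3", "Y4"]]
```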

### rename – Rename

rename new-name

Other name: a

• new-name: New name – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “

Changes the name of the current dataset. To help track the operations applied to a dataset, its name is modified and gets longer after each modification. Use rename to give it a more meaningful (and shorter) name.

If you need to rename a large number of datasets, you probably want to try save-datasets with /mode=rename.

### save – Save

save file /comments=text /mkpath=yes-no /number-format=text /overwrite=yes-no /row-names=yes-no /separator=text

Other name: s

• file: File name for saving – values: name of a file
• /comments=text: prefix for the comments – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /mkpath=yes-no: If true, creates all necessary directories – values: a boolean: yes, on, true or no, off, false
• /number-format=text: printf-like format string for numbers – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /overwrite=yes-no: If true, overwrite without prompting – values: a boolean: yes, on, true or no, off, false
• /row-names=yes-no: Whether to write row names or not – values: a boolean: yes, on, true or no, off, false
• /separator=text: column separator (default: tab) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “

Saves the current dataset to a file. This command will ask you before overwriting an existing file, unless /overwrite=true was specified.

The name of the current dataset will be changed to match the name of the file.

The following options control the output format:

• /separator specifies what separates the columns of numbers (defaults to a tabulation).
• /row-names specifies whether the names of the rows are written out in the first column; it is off by default.
• /number-format fine-tunes the way the numbers are written.

If you use /row-names=true, you should reload the saved file using:

QSoas> load-as-text /text-columns=1 file.dat

The /number-format= option can be used to specify a “sprintf-like” format for writing the numbers. See Ruby’s sprintf for more information. For instance, if you want to produce text files that could be included into a LaTeX document using siunitx, you could use:

QSoas> save table.tex /separator=& /number-format=\num{%g}

Be warned that QSoas will most probably not be able to automatically detect the format you used for saving if you use custom separators and/or formats.
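Since the format strings follow Ruby's sprintf, you can check what a given /number-format= produces directly in Ruby (a plain Ruby illustration):

```ruby
# What the \num{%g} format from the example above produces:
p sprintf("\\num{%g}", 0.00123)   # "\\num{0.00123}"
# A fixed-precision scientific format:
p sprintf("%.3e", 12345.678)      # "1.235e+04"
```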

### save-datasets – Save

save-datasets datasets… /comments=text /expression=text /format=text /mkpath=yes-no /mode=choice /number-format=text /overwrite=yes-no /row-names=yes-no /separator=text

Other name: save-buffers

• datasets…: datasets to save – values: comma-separated lists of datasets in the stack, see dataset lists
• /comments=text: prefix for the comments – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /expression=text: a Ruby expression to make file names – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /format=text: overrides dataset names if present – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /mkpath=yes-no: if true, creates all necessary directories (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /mode=choice: if using /format or /expression, whether to just save, to just rename or both (defaults to ‘both’) – values: one of: both, rename, save
• /number-format=text: printf-like format string for numbers – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /overwrite=yes-no: if false, will not overwrite existing files (warning: default is true) – values: a boolean: yes, on, true or no, off, false
• /row-names=yes-no: Whether to write row names or not – values: a boolean: yes, on, true or no, off, false
• /separator=text: column separator (default: tab) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “

Saves the designated datasets to files.

Unlike the save command, this saves the datasets using their current names, and does not prompt for a file name. It is probably a good idea to use rename first, or use the possibilities below.

This command can rename the datasets before saving them, by using a printf-like format, as in the following case, which renames the first 101 datasets to Buffer-000.dat, Buffer-001.dat, and so on:

QSoas> save-datasets /format=Buffer-%03d.dat 0..100
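The %03d part of the format is a printf-style integer field padded to three digits; in Ruby terms (an illustration, not QSoas code):

```ruby
# How Buffer-%03d.dat numbers successive datasets:
names = (0..2).map { |i| sprintf("Buffer-%03d.dat", i) }
p names   # ["Buffer-000.dat", "Buffer-001.dat", "Buffer-002.dat"]
```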

It is also possible to use a full-blown Ruby expression (returning a string) that will be aware of the dataset’s meta-data, via the /expression= option.

It is possible to sort in the reverse order using /reversed=true. By default, the statistics are not available, but you can use /use-stats=true to make them available under the variable $stats (as usual).

Important: this command modifies the stack directly; it cannot be undone, unless you took care of saving the stack beforehand using save-stack.

# Basic data manipulation at the dataset level

### apply-formula – Apply formula

apply-formula formula (/buffers=)datasets /extra-columns=integer /flags=flags /for-which=code /keep-on-error=yes-no /mode=choice /name=text /reversed=yes-no /set-meta=meta-data /style=style /use-meta=yes-no /use-names=yes-no /use-stats=yes-no

Other name: F

• formula: formula – values: a piece of Ruby code
• (/buffers=)datasets (default option): Datasets to work on – values: comma-separated lists of datasets in the stack, see dataset lists
• /extra-columns=integer: number of extra columns to create – values: an integer
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
• /keep-on-error=yes-no: if on, the points where the Ruby expression returns an error are kept, as invalid numbers – values: a boolean: yes, on, true or no, off, false
• /mode=choice: operating mode used by apply-formula – values: one of: add-column, xyy2, xyz
• /name=text: name of the new column (only in ‘add-column’ mode) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /use-meta=yes-no: if on (by default), you can use $meta to refer to the dataset meta-data – values: a boolean: yes, on, true or no, off, false
• /use-names=yes-no: if on, the columns will not be called x, y, and so on, but will take their names from the column names – values: a boolean: yes, on, true or no, off, false
• /use-stats=yes-no: if on, you can use $stats to refer to statistics (off by default) – values: a boolean: yes, on, true or no, off, false

Applies a formula to the current dataset. It should specify how the x and/or y values of the dataset are modified:

QSoas> apply-formula x=x**2
QSoas> apply-formula y=sin(x**2)
QSoas> apply-formula x,y=y,x

The last bit swaps the $x$ and $y$ values of the dataset. The formula must be valid Ruby code. In addition to x and y (note the lowercase!), the formula can refer to:

• i, the index of the data point
• seg, the number of the current segment (starting from 0)
• x_0, the value of $x$ of the first point of the current segment
• i_0, the index of the first point in the current segment
• y2, y3, etc. when there are more than 2 columns in the dataset
• $c.name refers to the value of the column named name, see there

It is possible to modify all of these variables, but only the modifications in x, y, y2 and so on are taken into account. In particular, $c.name cannot be used to modify the value of the column name (but see below). Here is how you can use i to have even points draw a sine wave and odd points a cosine:

QSoas> apply-formula y=(i%2==0?sin(x):cos(x))

% is the modulo operator. The construction with ? and : (called the ternary operator) means: if i%2==0 is true, then the value is sin(x), else cos(x).

You can use several instructions by separating them with ;:

QSoas> apply-formula x=x**2;y=x**2

This results in x values that are the squares of the old values, and y values that are the squares of the new x values.

Extra columns initially filled with 0 can be created by using the /extra-columns option:

QSoas> apply-formula /extra-columns=1 y2=y**2

This creates a third column (a second y column) containing the square of the values of the Y column.

If /use-stats=true is used, a global variable $stats can be used in the Ruby expression. It contains all the statistics displayed by stats. For instance, to normalize the Y values by dividing by the median, one would use:

QSoas> apply-formula /use-stats=true y=y/$stats.y_med

Note that you can make use of the special /= operator to shorten that into:

QSoas> apply-formula /use-stats=true y/=$stats.y_med

Statistics by segments (see more about segments there) are available too, which means that if you want to normalize by the median of the first segment, you could do:

QSoas> apply-formula /use-stats=true y/=$stats[0].y_med

If /use-meta is true (the default), then a global variable $meta is defined that contains the value of the meta-data (what is shown by show). What you make of this will greatly depend on the meta-data QSoas has gathered from your file (and those you have set manually using set-meta).

Some results will give “invalid numbers”, such as sqrt(-1). By default, QSoas strips the points corresponding to the invalid results, but you can keep them (as invalid numbers) using /keep-on-error=true (but be aware that working with invalid numbers is a real pain!).

It is now possible to work with several datasets using the /buffers option, and control the resulting datasets using the commands described there.

If the Ruby code uses the Ruby keyword break, then the processing of the dataset ends at that moment, keeping all the data points that have been calculated so far.
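The Ruby constructs used in the formulas above (the % modulo operator and the ?: ternary operator) behave exactly as in plain Ruby:

```ruby
# Even indices pick the first branch of the ternary, odd indices the second:
values = (0..3).map { |i| i % 2 == 0 ? "even" : "odd" }
p values   # ["even", "odd", "even", "odd"]
```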

#### Using column and row names

It is possible to use column and row names:

• The syntax $c.name refers to the value of the column named name.
• $row_name is the row name of the current row. It can be used to modify the row names of the dataset.
• It is possible to set the value of a named column directly. This requires using the /use-names=true option, which replaces all the standard x, y, y2 names by their real names. Note: this will only work if the column names are unique and are valid Ruby names. The following command modifies the column names to ensure this is the case:
QSoas> set-column-names /sanitize-names=true

#### Other modes

apply-formula offers two other modes in addition to what is described above, in which all columns have to be modified using either x, y, or their real names.

With /mode=add-column, the value of the expression is used to create a single new column. The other columns are not modified. You can specify the name of the new column using the /name= option.

For instance, the following command adds a new column named product that contains the product of the columns a and b:

QSoas> apply-formula /mode=add-column $c.a*$c.b /name=product

This is very useful to create a named column in datasets where the number of columns is not known (but their names are).

With /mode=xyz, the whole data is considered as a single $y = f(x,z)$ table. x is the usual value, and z corresponds to the perpendicular coordinates. This mode modifies all but the first column. There is no need to specify y= in the formula.

### dx – DX

dx

Replaces the Y values by the values of delta X, i.e. y[i] = x[i+1] - x[i]. This is useful to check whether the X values are equally spaced.
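In plain Ruby terms, the delta-X computation amounts to (a sketch with made-up values, not QSoas code):

```ruby
# dx: each Y value becomes the difference between consecutive X values.
x = [0.0, 0.5, 1.0, 2.0]
dx = x.each_cons(2).map { |a, b| b - a }
p dx   # [0.5, 0.5, 1.0] -- the last jump reveals unevenly spaced X values
```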

### dy – DY

dy

Same as dx but for Y values: replaces the Y values by the values of delta Y.

### zero – Makes 0

zero value /axis=axis

• value: – values: a floating-point number
• /axis=axis: which axis is zero-ed (default y) – values: one of: x, y

Given an X value, shifts the Y values so that the point closest to the given X value has a Y value of 0.

If /axis is x, swap X and Y in the above description.
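As a sketch in plain Ruby (hypothetical data, not QSoas code), zero 1.2 would do something like:

```ruby
# Find the point whose X is closest to 1.2 and shift all Y values
# so that this point ends up at Y = 0.
xs = [0.0, 1.0, 2.0]
ys = [5.0, 7.0, 9.0]
i = (0...xs.size).min_by { |k| (xs[k] - 1.2).abs }
shifted = ys.map { |y| y - ys[i] }
p shifted   # [-2.0, 0.0, 2.0]
```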

### shiftx – Shift X values

shiftx

Shifts the X values so that the first point has an X value of 0.

### norm – Normalize

norm (/map-to=)numbers /positive=yes-no

• (/map-to=)numbers (default option): Normalizes by mapping to the given segment – values: several floating-point numbers, separated by :
• /positive=yes-no: whether to normalize on positive or negative values (default true) – values: a boolean: yes, on, true or no, off, false

Normalizes the current dataset by dividing by its maximum value, or, if /positive=false, by the absolute value of its most negative value.

If the /map-to option is specified, the original dataset is mapped linearly to the given interval:

QSoas> norm /map-to=2:4

shifts and scales the original data so that the Y minimum is 2 and the Y maximum is 4.
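The /map-to mapping is a linear rescaling; as a sketch in plain Ruby (illustration only, not QSoas code):

```ruby
# Map Y values linearly so that the minimum becomes 2 and the maximum 4:
y = [1.0, 3.0, 5.0]
lo, hi = 2.0, 4.0
ymin, ymax = y.min, y.max
mapped = y.map { |v| lo + (hi - lo) * (v - ymin) / (ymax - ymin) }
p mapped   # [2.0, 3.0, 4.0]
```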

### deldp – Deldp

deldp (interactive)

With this command, you can click on individual data points to remove them. This is useful to remove a few spikes from the data. Middle-click or hit q to accept the modifications; hit escape to cancel them.

### edit – Edit dataset

edit

Opens a spreadsheet-like window where you can view and edit the individual values of the current dataset. If you want to save your modifications, press the “push new” button.

### sort – Sort

sort (/buffers=)datasets /column=column /flags=flags /for-which=code /reverse=yes-no /reversed=yes-no /set-meta=meta-data /style=style

• (/buffers=)datasets (default option): Datasets to sort – values: comma-separated lists of datasets in the stack, see dataset lists
• /column=column: – values: the number/name of a column in a dataset
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
• /reverse=yes-no: – values: a boolean: yes, on, true or no, off, false
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green

Sorts the dataset in increasing X values. It can work on several datasets specified by the /buffers= option, and in that case produces several datasets. The behaviour is controlled by the various options, see there for more information.

In addition, it is possible to control the behaviour of sort using the following options:

• /column specifies the column on which the sorting should be done (defaults to the X column);
• if /reverse is true, then the dataset will be sorted in descending order.

### reverse – Reverse

reverse

Reverses the order of all the data points: the last one now becomes the first one, and so on. Though this has no effect on the look of the data, this will impact commands that work with indices, such as cut and the multi-dataset processing commands (such as subtract, div) with /mode=indices.

### rotate – rotates the lines of the dataset

rotate delta

• delta: offset of the rotation – values: an integer

This command “rotates” the dataset: delta points are taken from the end of the dataset and put back at the beginning (in the same order). The overall number of points does not change. A negative delta will take points from the beginning and put them at the end.
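The effect is the same as Ruby's Array#rotate with a negated offset (a plain Ruby illustration, not QSoas code):

```ruby
# rotate 2 in QSoas: two points from the end move to the beginning.
points = [1, 2, 3, 4, 5]
p points.rotate(-2)   # [4, 5, 1, 2, 3]
```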

### strip-if – Strip points

strip-if formula (/buffers=)datasets /flags=flags /for-which=code /reversed=yes-no /set-meta=meta-data /style=style /threshold=integer /use-meta=yes-no /use-stats=yes-no

• formula: Ruby boolean expression – values: a piece of Ruby code
• (/buffers=)datasets (default option): Datasets to work on – values: comma-separated lists of datasets in the stack, see dataset lists
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /threshold=integer: If the stripping operation leaves less than that many points, do not create a new dataset – values: an integer
• /use-meta=yes-no: if on (by default), you can use $meta to refer to the dataset meta-data – values: a boolean: yes, on, true or no, off, false
• /use-stats=yes-no: if on, you can use $stats to refer to statistics (off by default) – values: a boolean: yes, on, true or no, off, false

Removes all points for which the ruby expression returns true. This can be used for quite advanced data selection:

QSoas> strip-if x>4

This removes all points whose X value is greater than 4.

QSoas> strip-if x>4||x<2

This removes all points whose X value is greater than 4 or whose X value is lower than 2. The || bit means OR. In other terms, it keeps only the X values between 2 and 4.

QSoas> strip-if x*y<10&&x>2

This removes all the points for which both the X value is greater than 2 and the product of X and Y is lower than 10.

When reading data files that contain spurious data points (such as text lines containing no data within a file read with load-as-text), QSoas replaces the missing data with special values called NaN (Not a Number). They can be useful at times, but they mess up statistics and fits. To remove them, use:

QSoas> strip-if x.nan?||y.nan?
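
The x.nan? test is plain Ruby: NaN is a floating-point value that Float#nan? detects. A minimal illustration of what strip-if does with such a formula (not QSoas code):

```ruby
# NaN is not equal even to itself, so Float#nan? is the reliable test.
values = [1.0, Float::NAN, 3.0]

# What "strip-if x.nan?" does, expressed on a single column:
clean = values.reject { |v| v.nan? }  # => [1.0, 3.0]
```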

Like in apply-formula, you can use the statistics and the meta-data of the datasets if you use the options /use-meta (on by default) and /use-stats, or also the column names using $c.name.

By default, strip-if creates a new dataset regardless of the number of points left (even if there are no points left). Giving a value to the /threshold option prevents strip-if from creating a new dataset if it has less than that many points. Like the other commands that can produce several datasets in one go, strip-if has a number of options to control how the datasets are produced.

### integrate – Integrate

integrate /index=integer

• /index=integer: index of the point that should be used as y = 0 – values: an integer

Integrate just does the reverse of diff and integrates the current dataset. The first data point is the one for which Y = 0, unless an index is given to the /index option, in which case the numbered point ends up at 0.

### diff – Derive

diff /derivative=integer /order=integer

• /derivative=integer: the number of the derivative to take, only valid together with the order option – values: an integer
• /order=integer: total order of the computation – values: an integer

Computes the 4th order accurate derivative of the dataset. This is efficient for computing the derivative of smooth data, but it gives very poor results on noisy data. In general, for the derivation of real data, prefer filter-fft, filter-bsplines or auto-reglin, which will give much better results.

Starting from QSoas version 2.1, a second mode is available, in which you can choose an arbitrary order for the derivation (it has to be less than the number of points of the dataset), via the option /order=, and an optional derivative via the /derivative option. For instance, you can reproduce the effect of diff2 using:

QSoas> diff /order=4 /derivative=2

### diff2 – Derive twice

diff2

Computes the 4th order accurate second derivative of the dataset. The same warnings apply as for diff.
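
The idea behind a numerical derivative like the one diff computes can be sketched in plain Ruby with a simple central difference (QSoas uses a more accurate 4th order scheme; this is only an illustration):

```ruby
# Central-difference derivative of y with respect to x; the two end
# points fall back to one-sided differences.
def central_diff(x, y)
  (0...x.size).map do |i|
    lo = [i - 1, 0].max
    hi = [i + 1, x.size - 1].min
    (y[hi] - y[lo]) / (x[hi] - x[lo])
  end
end

x = (0..10).map { |i| i * 0.5 }
y = x.map { |v| v * v }          # y = x^2, so dy/dx = 2x
slopes = central_diff(x, y)      # interior points give 2x exactly here
```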
### dataset-options – Options

dataset-options /histogram=yes-no /yerrors=column

• /histogram=yes-no: whether to show as a histogram (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /yerrors=column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’

Sets options for the current dataset:

• /yerrors sets the display of errors on Y values, see there for more information on how to specify the columns;
• /histogram sets whether or not the dataset should be displayed as a histogram.

### edit-errors – Edit errors

edit-errors (interactive)

Provides an interface for manually editing the errors attached to each point of the current dataset. This function will create a column containing errors if there is none yet. Pick left and right bounds with the left and right mouse buttons, and set the errors within the bounds with i and outside with o. This is typically used to crudely exclude some bits of the dataset from fitting, by setting much larger errors for those bits than for the rest.

### set-row-names – Set row names

set-row-names (/names=)words /clear=yes-no

• (/names=)words (default option): Names of the rows – values: several words, separated by ‘ ’
• /clear=yes-no: Removes all the names – values: a boolean: yes, on, true or no, off, false

Sets the names of the rows. The names can either be a simple list, or a series of specifications like #10:name, #-4:name or #1..5:name, which set the row name to name for, respectively, the 11th row (indices are 0-based), the 4th row starting from the end, or all rows between the second and the sixth (included).
### set-column-names – Set column names

set-column-names (/names=)words /clear=yes-no /columns=columns /sanitize-names=yes-no

• (/names=)words (default option): Names of the columns – values: several words, separated by ‘ ’
• /clear=yes-no: Removes all the names – values: a boolean: yes, on, true or no, off, false
• /columns=columns: Sets the names of these columns only – values: a comma-separated list of columns names
• /sanitize-names=yes-no: Adapts the names so that they can be used with apply-formula /use-names=true – values: a boolean: yes, on, true or no, off, false

Sets the column names to the list of names given. By default, the names given apply in order (and the other ones are left untouched), but you can choose which column(s) they apply to using the /columns= option. For instance, this sets only the name of the 5th column (corresponding to y4):

QSoas> set-column-names new_y4 /columns=y4

/clear=yes clears all the column names, so they are back to the default values (x, y, y2 and so on). /sanitize-names=true will make the column names suitable for use with apply-formula /use-names=true.

## Splitting the dataset in bits (and back)

### cut – Cut

cut (interactive)

Other name: c

Interactively cuts bits out of the dataset. Left and right mouse clicks set the left and right limits. Middle click or q quits, leaving only the part that is within the region, while u leaves only the outer part. r removes the part inside the region, but lets you keep on editing the dataset. Hit escape to cancel. By default, the Y values are displayed as a function of the index; you can switch back to displaying Y values as a function of X by hitting x.
### chop – Chop dataset

chop (/lengths=)numbers /flags=flags /from-meta=text /mode=choice /reversed=yes-no /set-meta=meta-data /set-segments=yes-no /style=style

• (/lengths=)numbers (default option): Lengths of the subsets – values: several floating-point numbers, separated by ,
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /from-meta=text: – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /mode=choice: Whether to cut on index or x values (default) – values: one of: deltax, index, indices, xvalues
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /set-segments=yes-no: Whether to actually cut the dataset, or just to set segments where the cuts would have been – values: a boolean: yes, on, true or no, off, false
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green

Cuts the dataset into several parts based on the numbers given as arguments, and saves them as separate datasets. The interpretation of the numbers depends on the value of the /mode option:

• deltax (default): the numbers are the lengths (in terms of X) of the sub-datasets
• xvalues: the numbers are the X values at which to split
• index (or indices): the numbers are the indices of the points at which to split

If /set-segments is on, the X values are not used to create independent datasets but rather to set the position of the segments. If the option /from-meta is used, it designates a meta-data containing a list of values. In that case, the values given on the command-line are ignored, and the values contained in the meta-data are used instead.

### splita – Split first

splita

Returns the first part of the dataset, until the first change of sign of $\Delta x$.
Useful to get the forward scan of a cyclic voltammogram.

### splitb – Split second

splitb

Returns the part of the dataset after the first change of sign of $\Delta x$. Useful to get the backward scan of a cyclic voltammogram.

### split-monotonic – Split into monotonic parts

split-monotonic /flags=flags /group=integer /keep-first=integer /keep-last=integer /reversed=yes-no /set-meta=meta-data /style=style

• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /group=integer: Group that many segments into one dataset – values: an integer
• /keep-first=integer: Keep only the first n elements of the results – values: an integer
• /keep-last=integer: Keep only the last n elements of the results – values: an integer
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green

Splits the dataset into datasets whose X values all increase or decrease monotonically. With /group=2, each resulting dataset will contain two monotonic segments. The /keep-first or /keep-last options make it possible to keep only a given number of the generated datasets.

### unwrap – Unwrap

unwrap /reverse=yes-no /scan-rate=number

• /reverse=yes-no: If true, reverses the effect of a previous unwrap command – values: a boolean: yes, on, true or no, off, false
• /scan-rate=number: Sets the scan rate – values: a floating-point number

This command makes the X values of the current dataset monotonic by ensuring that $\Delta x$ always has the same sign, changing it if needed. The command places segment limits at the positions of the changes in direction.
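
The unwrap idea can be sketched in plain Ruby: rebuild X from the absolute values of the increments, so that $\Delta x$ keeps a constant sign (illustration only, not QSoas code):

```ruby
# Make X monotonic by accumulating the absolute increments.
def unwrap_x(x)
  out = [x.first]
  x.each_cons(2) { |a, b| out << out.last + (b - a).abs }
  out
end

# A triangular sweep (up then down), as in a cyclic voltammogram:
unwrap_x([0, 1, 2, 1, 0])  # => [0, 1, 2, 3, 4]
```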
This is useful for instance to convert a cyclic voltammogram from $i = f(E)$ to $i = f(t)$; for that purpose, the scan rate can be provided using the /scan-rate= option, or can be guessed from the sr meta-data.

The unwrap operation can be reverted by calling unwrap with /reverse=true, which will use the scan rate information and the position of the segments to reconstruct the original data.

### cat – Concatenate

cat buffers… /add-segments=yes-no /contract-meta=meta-data

Other name: i

• buffers…: Datasets to concatenate – values: comma-separated lists of datasets in the stack, see dataset lists
• /add-segments=yes-no: If on (default), segments are added between the old datasets – values: a boolean: yes, on, true or no, off, false
• /contract-meta=meta-data: Contracts all the named meta-data into lists – values: comma-separated list of meta-data to group into lists, see there

Concatenates the datasets given as arguments, adding segment stops in between (unless /add-segments=false is used). This can be used to reverse the effect of the previous commands. It does not change the number of columns; if you want to gather several Y columns as a function of the same X, use contract instead.

If the option /contract-meta is used, then the meta-data whose names are given to that option will be gathered from all the original datasets and transformed into a meta-data list. See there for more information.

## Dataset’s meta-data and perpendicular coordinates

QSoas’ datasets (or buffers) hold more than just columns of numbers. When a file is loaded, QSoas also gathers as much information as possible about that file, such as the original file name, the file date, and, for file formats supported by QSoas, details about the experimental conditions recorded in that file. These are known as “meta-data”, and can be displayed using the show command.
Here are some meta-data of particular significance available to all datasets loaded from files:

• file_date is the date of the file
• original_file is the file name of the loaded file
• age is how old the file was, in seconds, when the current QSoas session was started
• commands is the list of commands that have been applied to this dataset since its load/creation

Upon saving with save, all meta-data are saved as comments in the text file.

Perpendicular coordinates make sense when a dataset has several Y columns. For instance, when the dataset consists of spectra taken at different times, like in the tutorial (or at different solution potentials for a redox titration), then the X values will be the wavelength, and each Y column will correspond to a different time. The time is then the perpendicular coordinate. One can set the perpendicular coordinates manually using set-perp.

Many commands use perpendicular coordinates, most notably transpose (which would convert the columns of $y = f(\lambda)$ for different values of $t$ above into columns of $y = f(t)$ for different values of $\lambda$), and all the multi-fit commands, which show parameters as a function of the perpendicular coordinates when applicable.

Some meta-data have a special meaning for QSoas, which uses them for specific functions. Meta-data can be of several types, like text or number, but also lists. See for instance the /type=number-list option of set-meta.

## Selecting datasets and files based on meta-data

Some commands, namely flag, unflag and browse, accept a /for-which option to select the datasets (or files) they work on based on their properties. The value of /for-which is a ruby formula that uses the global variables $meta and $stats. For instance, the following command flags all the datasets that have a maximum value greater than 1e-4:

QSoas> flag all /for-which $stats.y_max>=1e-4

How to test for equality: in ruby, you need to use == to test whether two values are the same. For instance, to flag voltammograms in which the scan rate is 0.1 V/s, you have to use:

QSoas> flag all /for-which $meta.sr==0.1

Replacing the == by = in the code above leads to selecting all the datasets, because $meta.sr=0.1 is always true (see more about the ruby expressions there).
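
The difference can be checked in plain Ruby; this sketch shows why the assignment always counts as true:

```ruby
# In Ruby an assignment is an expression whose value is the assigned
# value, and 0.1 is truthy, so "sr = 0.1" never filters anything out.
sr = nil
assigned = (sr = 0.1)    # assignment: always evaluates to 0.1, hence "true"
compared = (sr == 0.1)   # comparison: evaluates to true or false
```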

## Meta-data expansion/contraction

Some commands like contract gather several datasets into a single one, or on the contrary, like expand create many datasets from a single one.

By default, the meta-data are either all copied from the source (when creating several datasets), or taken from one of the datasets (when making one from several). However, in some cases, you may want to contract all the values of a meta-data from several datasets into a single meta-data containing a list of the original meta-data, or, conversely, expanding the list by taking one value for each of the dataset produced.

This can be achieved using the relevant /expand-meta or /contract-meta option which takes a list of the names of the meta-data you want to expand/contract.

### show – Show information

show datasets…

• datasets…: Datasets to show – values: comma-separated lists of datasets in the stack, see dataset lists

This command gives detailed information about the datasets given as arguments, such as the number of rows, columns, segments, but also the flags the dataset may have, and all their meta-data:

QSoas> show 0
Dataset 08.oxw: 2 cols, 4975 rows, 1 segments
Flags:
Meta-data:	delta_t_0 = 950	gpes_file = D:\Vincent\140428\08	original-file = /home/vincent/Data/140428/08.oxw
age = 428907.581	steps = 1	title =
file-date = 2014-05-23T21:23:38	exp-time = 14:03:08	comments =
t_0 = 0	E_0 = -0.65	method = chronoamperometry

### set-meta – Set meta-data

set-meta name value /also-record=yes-no /type=choice

• name: name of the meta-data – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• value: value of the meta-data – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /also-record=yes-no: also record the meta-data as if one had used record-meta on the original file – values: a boolean: yes, on, true or no, off, false
• /type=choice: type of the meta-data – values: one of: number, number-list, text

Using set-meta, one can set the value of the named meta-data for the current dataset. name can have any value; it does not have to exist already in the dataset’s list of meta-data.

The actual type of the meta-data can be specified using the /type option. For now, it is mostly useful to specify lists of numbers:

QSoas> set-meta injection-times 100,200,300 /type=number-list

This specifies that the meta-data injection-times is a list of numbers (and not text).

Meta-data are not permanent, and will be forgotten from one QSoas session to another. To store meta-data permanently, so that they are set again the next time QSoas loads this file, either use record-meta, or use /also-record=true, which has the same effect as running record-meta on the original file.

### record-meta – Set meta-data

record-meta name value files… /exclude=files /remove=yes-no /type=choice

• name: name of the meta-data – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• value: value of the meta-data – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• files…: files on which to set the meta-data – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /exclude=files: exclude files – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /remove=yes-no: remove the meta rather than adding it – values: a boolean: yes, on, true or no, off, false
• /type=choice: type of the meta-data – values: one of: number, number-list, text

record-meta is the “permanent” version of set-meta. It sets meta-data permanently for a series of files (and not datasets as in the case of set-meta). For instance, after running

QSoas> record-meta pH 7 experiment.dat another.dat

The next time QSoas loads either experiment.dat or another.dat, they will automatically have a meta-data called pH with a value 7.

Behind the scenes: the meta-data are stored in special files, one for each of the data files. They are almost plain text files (more precisely, JSON files). They have the names of the original files with a .qsm suffix appended. If you move data files around, you also need to move these files if you want the meta-data to follow.

If you use /remove=true, then the meta-data is removed instead of being added. Important note: you still must provide a value, which will not be used. Thus, to remove the meta-data added by the previous command, you could use:

QSoas> record-meta /remove=true pH whatever experiment.dat another.dat

### save-meta – Save meta-data back to file

save-meta (/file=)file

• (/file=)file (default option): save for this file – values: name of a file

This command saves the meta-data of the current dataset, either to the “original file”, that is the file the current dataset is derived from, or to the file given as the /file option.

This command does not modify the actual data, just the .qsm file containing the meta-data.

### set-perp – Set perpendicular

set-perp (/coords=)numbers /from-row=integer

• (/coords=)numbers (default option): The values of the coordinates (one for each Y column) – values: several floating-point numbers, separated by ,
• /from-row=integer: Sets the values from the given row (and delete it) – values: an integer

Sets the perpendicular coordinates for the Y columns, as comma-separated values. There must be as many perpendicular coordinates as there are Y columns.

Another possibility is to specify a row using /from-row. In that case, the perpendicular coordinates are taken from the values of the row (the first element, corresponding to the x value, is ignored), and the row is deleted. This is useful when the text data contains the perpendicular coordinates as a “text header”.

### transpose – Transpose

transpose

This command transposes the matrix of the Y columns, while paying attention to the perpendicular coordinates. In short, if one starts from a series of Y columns representing spectra as a function of $\lambda$ (the X column) for different values of time (each column at a different value of $t$), then after transpose, the new dataset contains columns describing the time evolution of the absorbance for different values of $\lambda$ (one for each column).
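
What happens to the Y block can be pictured in plain Ruby, where the X values and the perpendicular coordinates swap roles (illustration only, not QSoas code):

```ruby
wavelengths = [400, 410]     # the X column
times       = [0, 10, 20]    # perpendicular coordinates, one per Y column
y = [[0.1, 0.2, 0.3],        # absorbance at 400 nm for t = 0, 10, 20
     [0.4, 0.5, 0.6]]        # absorbance at 410 nm

yt = y.transpose             # => [[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]]
# After transpose: the X column holds times, and the perpendicular
# coordinates are the wavelengths.
```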

### tweak-columns – Tweak columns

tweak-columns /flip=yes-no /flip-all=yes-no /remove=columns /select=columns

• /flip=yes-no: If true, flips all the Y columns – values: a boolean: yes, on, true or no, off, false
• /flip-all=yes-no: If true, flips all the columns, including the X column – values: a boolean: yes, on, true or no, off, false
• /remove=columns: the columns to remove – values: a comma-separated list of columns names
• /select=columns: select the columns to keep – values: a comma-separated list of columns names

tweak-columns provides means to remove, select, reorder and flip columns.

If a list of columns is given to the /remove option, then the given columns are removed. If /flip is on, then all Y columns are reversed. If /flip-all is on, then all columns, including the X column, are reversed.

If a list of columns is given to the /select option, then the newly created dataset will be composed only of the columns specified, in the order they are specified. The columns can be used more than once.

### split-on-values – Split on column values

split-on-values meta… columns /flags=flags /reversed=yes-no /set-meta=meta-data /style=style

• meta…: Names of the meta to be created – values: several words, separated by ‘,’
• columns: Columns whose values one should split on – values: a comma-separated list of columns names
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green

This command splits the current dataset into a number of datasets, based on the contents of the columns columns. Each newly created dataset corresponds to points in the original dataset that had exactly the same values in the designated columns. These columns are removed from the newly created datasets, and their values are used to set the meta-data meta. There must be as many comma-separated names in meta as there are column names in columns.

# Data filtering/processing

QSoas provides different ways to process data to remove unwanted noise:

In addition, QSoas provides ways to remove calculated “baselines”:

### filter-fft – FFT filter

filter-fft /derive=integer (interactive)

• /derive=integer: The starting order of derivation – values: an integer

Filters data using FFT, ie the data is Fourier transformed, then a filter function is applied in the frequency domain and the result is backward transformed.

The cutoff can be changed using the mouse left/right buttons. The power spectrum can be displayed using the p key, and the derivative can be displayed with d (in which case you get the derivative of the signal when accepting the data).

Behind the scenes, a cubic baseline is computed and subtracted from the data to ensure that the data to which the FFT is applied has 0 value and 0 derivative on both sides. This greatly reduces artifacts at the extremities of the dataset. This baseline is computed using a small heuristic. You can display it using the b key.

If you want to do that non-interactively, look at auto-filter-fft.

### filter-bsplines – B-Splines filter

filter-bsplines /weight-column=column (interactive)

• /weight-column=column: Use the weights in the given column – values: the number/name of a column in a dataset

Filters the data using B-splines: B-splines are polynomial functions of a given order defined over segments. The filtering process finds the linear combination of these spline functions that is the closest to the original data.

This approach amounts to taking the projection of the original data onto the subspace of the polynomial functions.

The result can be tuned by placing “nodes”, ie the X positions of the segments over which the splines are defined. Put more nodes in an area where the data is not described properly by the smoothed function. Increasing the order (using +) may help too.

Like for filter-fft, you can derive the data as well by pushing the d key.

Hitting the o key optimizes the position of the segments in order to minimize the difference between the data and the approximation (be careful, as this function may fail at times).

If you want to do that non-interactively, look at auto-filter-bs.

### auto-filter-bs – Auto B-splines

auto-filter-bs (/buffers=)datasets /derivatives=integer /flags=flags /for-which=code /number=integer /optimize=integer /order=integer /reversed=yes-no /set-meta=meta-data /style=style /weight-column=column

Other name: afbs

• (/buffers=)datasets (default option): Datasets to filter – values: comma-separated lists of datasets in the stack, see dataset lists
• /derivatives=integer: computes derivatives up to this number – values: an integer
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
• /number=integer: number of segments – values: an integer
• /optimize=integer: number of iterations to optimize the position of the nodes (defaults to 15, set to 0 or less to disable) – values: an integer
• /order=integer: order of the splines – values: an integer
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /weight-column=column: uses the weights in the given column – values: the number/name of a column in a dataset

Filters the data using B-splines in a non-interactive fashion. Performs automatically an optimization step, like hitting o in filter-bsplines, with a number of iterations that is configurable using the /optimize= option (0 disables that altogether).

This is mostly useful in scripts.

### auto-filter-fft – Auto FFT

auto-filter-fft (/buffers=)datasets /cutoff=integer /derive=integer /flags=flags /for-which=code /reversed=yes-no /set-meta=meta-data /style=style /transform=yes-no

Other name: afft

• (/buffers=)datasets (default option): Datasets to filter – values: comma-separated lists of datasets in the stack, see dataset lists
• /cutoff=integer: value of the cutoff – values: an integer
• /derive=integer: differentiate to the given order – values: an integer
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /transform=yes-no: if on, pushes the transform (off by default) – values: a boolean: yes, on, true or no, off, false

Filters data using FFT in a non-interactive fashion. Useful in scripts.

With /transform=yes, pushes the Fourier transform of the data, in the format:

freq magnitude real imag

### auto-reglin – Automatic linear regression

auto-reglin /filter=yes-no /window=integer

• /filter=yes-no: If true (not the default), filter the data instead of computing the slope – values: a boolean: yes, on, true or no, off, false
• /window=integer: Number of points (after and before) over which to perform regression – values: an integer

Performs a linear regression on a number of points around each point of the graph and creates a dataset from the resulting slopes, which results in a derivative dataset. This command is similar to but provides less noisy output than diff, and also similar to filtering with FFT (using filter-fft) and taking the derivative.

The option /window specifies the number of points on either side of each point used for the linear regression (the default is 7, so the regression is made over 15 points in total).

With /filter=true, the linear regression is used to predict values of the points, which acts as a filter of the data, just like filter-fft or filter-bsplines.
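
The sliding-window idea can be sketched in plain Ruby; this is an illustration of the principle, not the QSoas implementation (edge handling in particular is simplified):

```ruby
# Least-squares slope of a straight line through the given points.
def reglin_slope(x, y)
  n   = x.size.to_f
  sx  = x.sum
  sy  = y.sum
  sxx = x.map { |v| v * v }.sum
  sxy = x.zip(y).map { |a, b| a * b }.sum
  (n * sxy - sx * sy) / (n * sxx - sx * sx)
end

# For each point, fit a line over `window` points on each side and keep
# the slope: the result approximates the derivative.
def auto_reglin(x, y, window = 7)
  (0...x.size).map do |i|
    lo = [i - window, 0].max
    hi = [i + window, x.size - 1].min
    reglin_slope(x[lo..hi], y[lo..hi])
  end
end

x = (0..20).map { |i| i.to_f }
y = x.map { |v| 3.0 * v + 1.0 }   # a straight line of slope 3
slopes = auto_reglin(x, y)        # every slope is 3
```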

### kernel-filter – Kernel filter

kernel-filter /alpha=number /size=integer /threshold=number /type=choice

• /alpha=number: Gaussian spread (only for gaussian) – values: a floating-point number
• /size=integer: Half window size – values: an integer
• /threshold=number: Threshold for impulse filters – values: a floating-point number
• /type=choice: Kernel type – values: one of: gaussian, impulse-iqr, impulse-mad, impulse-qn, impulse-sn, median, rmedian

This command filters the data using different filters that have in common that they work on a small number of points at a time (given via the /size option, which corresponds to the half-width of the window).

The filters available are:

• gaussian, a gaussian kernel (see there), whose spread can be parametrized using the /alpha option;
• median and rmedian, median and recursive median filters (see there);
• impulse-iqr, impulse-mad, impulse-qn and impulse-sn, various types of impulse detection filters (see there), parametrized using the /threshold= option.
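
The median filter, the simplest of these kernels, can be sketched in plain Ruby (illustration of the principle, not the exact QSoas implementation):

```ruby
# Replace each point by the median of a window of half-width `size`
# around it; the window shrinks at the edges.
def median_filter(y, size = 2)
  (0...y.size).map do |i|
    lo = [i - size, 0].max
    hi = [i + size, y.size - 1].min
    win = y[lo..hi].sort
    win[win.size / 2]
  end
end

noisy = [1, 1, 9, 1, 1, 1]      # one spike at index 2
median_filter(noisy, 1)         # the spike is removed
```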

### remove-spikes – Remove spikes

remove-spikes /factor=number /force-new=yes-no /number=integer

Other name: R

• /factor=number: threshold factor – values: a floating-point number
• /force-new=yes-no: creates a new dataset even if no spikes were removed (default: false) – values: a boolean: yes, on, true or no, off, false
• /number=integer: looks at that many points – values: an integer

Removes spikes using a simple heuristic: a point is considered a “spike” if, over the /number points around it, the differences between this point and its neighbours are larger than /factor times the other differences in the interval. This command will not create a new dataset if no spikes were removed, unless you specify /force-new=true, in which case the dataset is duplicated; this is useful for scripting, when you need a reproducible number of created datasets, regardless of whether spikes are present or not.

### downsample – Downsample

downsample /factor=integer

• /factor=integer: Downsampling factor – values: an integer

Creates a dataset with about factor times fewer points than the original dataset (by default, 10 times fewer) by averaging the original X and Y values in groups of factor points. This command averages the other columns too.
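
The principle can be sketched in plain Ruby on a single column (illustration only, not QSoas code):

```ruby
# Average consecutive groups of `factor` values; a trailing partial
# group is averaged over however many points it contains.
def downsample(values, factor = 10)
  values.each_slice(factor).map { |g| g.sum.to_f / g.size }
end

downsample([1, 2, 3, 4, 5, 6], 2)  # => [1.5, 3.5, 5.5]
```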

### baseline – Baseline

baseline (interactive)

Other name: b

Draw a baseline by placing markers on the curve using the mouse (or off the curve, after using the o key). The baseline is computed using one of several interpolation algorithms: C-splines, linear or polynomial interpolation, and Akima splines (the latter usually follows the accidents of the curve best). Cycle between the various schemes by hitting t.

It is possible to save not the interpolated data, but just the interpolation “nodes” (ie the big dots), by pushing the p key. This has two advantages: first, one can load nodes from a dataset by hitting the L key and providing the dataset number (or just their X values by hitting l). Second, if one has the nodes and just the X values, one can generate the interpolated data using interpolate.

The area between the baseline and the curve is displayed in the terminal. If the dataset has a meta-data named sr, it is taken as a scan rate (as in cyclic voltammetry), and the charge is displayed too.

### interpolate – Interpolate

interpolate xvalues nodes /type=choice

• xvalues: Dataset serving as base for X values – values: a dataset in the stack. Can be designated by its number or by a flag (if it’s unique)
• nodes: Dataset containing the nodes X/Y values – values: a dataset in the stack. Can be designated by its number or by a flag (if it’s unique)
• /type=choice: Interpolation type – values: one of: akima, linear, polynomial, spline

Given a dataset containing xvalues and another one containing the X/Y position of interpolation nodes saved using p from within baseline, this command regenerates the interpolated values, for the given X values.

Through this approach, one can draw a baseline, save the points, generate the baseline-subtracted data using interpolate from within a script. This has the advantage that one can always have a close look at the quality of the baseline, and tweak it if need be.
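
The regeneration step itself is simple; here is a hedged pure-Python sketch of the linear variant (QSoas also offers akima, spline and polynomial types; the function name is mine):

```python
def interpolate_nodes(xvalues, node_x, node_y):
    """Regenerate a baseline at arbitrary X values from saved nodes,
    using linear interpolation. Illustrative sketch; nodes are
    assumed sorted in X, and X values outside the node range are
    clamped to the end values."""
    out = []
    for x in xvalues:
        if x <= node_x[0]:
            out.append(node_y[0])
        elif x >= node_x[-1]:
            out.append(node_y[-1])
        else:
            # find the node interval containing x, then interpolate
            j = max(i for i in range(len(node_x) - 1) if node_x[i] <= x)
            t = (x - node_x[j]) / (node_x[j + 1] - node_x[j])
            out.append(node_y[j] + t * (node_y[j + 1] - node_y[j]))
    return out
```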

### catalytic-baseline – Catalytic baseline

catalytic-baseline (interactive)

Other name: B

Draws a so-called “catalytic” baseline. There are several types of baselines, but they all share the following features:

• they are defined by 4 points
• the first two points correspond to points where the baseline sticks to the data
• the last two points give a “direction”

There are two baselines implemented for now:

• a cubic baseline, that goes through the first two points and is parallel to the slope of the last two
• an exponential baseline, that goes through the first two points and has the same ratio as the data for the last two points

### solve – Solves an equation

solve formula /iterations=integer /max=text /min=text /prec-absolute=number /prec-relative=number

• formula: An expression of the y variable – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /iterations=integer: Maximum number of iterations before giving up – values: an integer
• /max=text: An expression giving the upper boundary for dichotomy approaches – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /min=text: An expression giving the lower boundary for dichotomy approaches – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /prec-absolute=number: absolute precision required – values: a floating-point number
• /prec-relative=number: relative precision required – values: a floating-point number

Solves an equation on $y$ on the current dataset. For instance,

QSoas> solve y**2-x

solves for $y$ the equation $y^2 - x = 0$.

By default, the algorithm used is an iterative process starting from the current value of $y$ (i.e. the value before the command starts). You can use a dichotomy approach by specifying upper and lower bounds using the /min= and /max= options:

QSoas> solve y**2-x /min=0 /max=x
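
The dichotomy is plain bisection between the two bounds. A hedged sketch of the idea, solving $y^2 - x = 0$ row by row (the helper name and the bounds are mine):

```python
def solve_dichotomy(f, lo, hi, prec=1e-10, iterations=100):
    """Bisection between bounds where f changes sign; this is the
    kind of dichotomy that /min= and /max= enable. Sketch only."""
    flo = f(lo)
    for _ in range(iterations):
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if abs(fm) < prec or hi - lo < prec:
            return mid
        if (fm < 0) == (flo < 0):
            lo, flo = mid, fm       # root is in the upper half
        else:
            hi = mid                # root is in the lower half
    return 0.5 * (lo + hi)

# solve y**2 - x = 0 for each x of a dataset, with bounds [0, x+1]
xs = [0.25, 1.0, 4.0]
ys = [solve_dichotomy(lambda y, x=x: y**2 - x, 0.0, x + 1.0) for x in xs]
```

Here ys comes out as the square roots of xs, as expected.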

### auto-correlation – Auto-correlation

auto-correlation

Other name: ac

Computes the auto-correlation function of the data, using FFT.
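
For reference, the quantity computed is the circular auto-correlation of the data; a direct-sum sketch (QSoas uses FFT, which is much faster on large datasets; the normalization convention here is my assumption):

```python
def autocorrelation(y):
    """Circular auto-correlation, normalized by the number of points.
    Via FFT this is the Wiener-Khinchin theorem, acf = ifft(|fft(y)|^2);
    the direct sum below gives the same result."""
    n = len(y)
    return [sum(y[i] * y[(i + k) % n] for i in range(n)) / n
            for k in range(n)]

acf = autocorrelation([1.0, 0.0, -1.0, 0.0] * 8)  # a period-4 signal
```

On a period-4 signal, the auto-correlation peaks again at lag 4 and is most negative at lag 2.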

### bin – Bin

bin /boxes=integer /column=column /log=yes-no /max=number /min=number /norm=yes-no /weight=column

• /boxes=integer: – values: an integer
• /column=column: – values: the number/name of a column in a dataset
• /log=yes-no: – values: a boolean: yes, on, true or no, off, false
• /max=number: Maximum value of the histogram, overrides the maximum of the values in the data – values: a floating-point number
• /min=number: Minimum value of the histogram, overrides the minimum of the values in the data – values: a floating-point number
• /norm=yes-no: – values: a boolean: yes, on, true or no, off, false
• /weight=column: – values: the number/name of a column in a dataset

Creates a histogram by binning the Y values (or the values of the column given by the /column option, see above) into boxes (whose number can be controlled using the /boxes option). The new dataset has the centers of the boxes as X values and the number of data points falling into each box as Y values.

By default, all original points have a weight of 1. You can use the /weight= option to specify a column that contains the weight of each point.

The range of values used is automatically deduced from the data, but you can use the /min= and /max= options to set it manually.
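
The binning logic can be sketched as follows (a hypothetical helper; the edge conventions may differ from QSoas's):

```python
def bin_values(values, boxes=10, lo=None, hi=None, weights=None):
    """Histogram: X values are the box centers, Y values the
    (weighted) counts per box. Sketch of the binning logic only."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    width = (hi - lo) / boxes
    counts = [0.0] * boxes
    ws = weights if weights is not None else [1.0] * len(values)
    for v, w in zip(values, ws):
        i = boxes - 1 if v == hi else int((v - lo) / width)
        if 0 <= i < boxes:          # values outside [lo, hi] are ignored
            counts[i] += w
    centers = [lo + (i + 0.5) * width for i in range(boxes)]
    return centers, counts
```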

### add-noise – Add noise

add-noise sigma /distribution=choice /seed=integer

• sigma: ‘Amplitude’ of the noise – values: a floating-point number
• /distribution=choice: The noise distribution – values: one of: cauchy, gaussian, uniform
• /seed=integer: The generator seed. If not specified or negative, uses the current time – values: an integer

This command adds random noise following the distribution given as the /distribution option (default is uniform noise) with the given “amplitude” (the scale parameter of the distributions).

It is possible to obtain reproducible results by using a given /seed parameter.
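
The seeding behaviour can be sketched like this (QSoas relies on the GSL generators; this Python sketch, with illustrative names, only mirrors the idea):

```python
import random

def add_noise(ys, sigma, distribution="uniform", seed=None):
    """Add noise of scale `sigma`; reusing the same seed reproduces
    the exact same noise. Sketch only: QSoas uses GSL generators, and
    also offers a cauchy distribution, omitted here."""
    rng = random.Random(seed)
    draw = {"uniform": lambda: rng.uniform(-sigma, sigma),
            "gaussian": lambda: rng.gauss(0.0, sigma)}[distribution]
    return [y + draw() for y in ys]
```

Two calls with the same seed give identical noisy datasets; uniform noise stays within the given amplitude.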

### linear-least-squares – Linear least squares

linear-least-squares formula (/buffers=)datasets /accumulate=meta-data /for-which=code /meta=meta-data /output=yes-no /set-meta=meta-data /use-meta=yes-no /use-names=yes-no /use-stats=yes-no

• formula: formula – values: a piece of Ruby code
• (/buffers=)datasets (default option): Buffers to work on – values: comma-separated lists of datasets in the stack, see dataset lists
• /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
• /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
• /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
• /output=yes-no: whether to write data to output file (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
• /use-meta=yes-no: if on (by default), you can use $meta to refer to the dataset meta-data – values: a boolean: yes, on, true or no, off, false
• /use-names=yes-no: if on, the columns will not be called x, y, and so on, but will take their names from the column names – values: a boolean: yes, on, true or no, off, false
• /use-stats=yes-no: if on (off by default), you can use $stats to refer to statistics – values: a boolean: yes, on, true or no, off, false

This command runs a linear least squares minimization of the given formula on the current dataset (or on the ones specified by /buffers and /for-which). As the linear least squares problem has a single analytical solution, there is no need for a fit interface like that of the fit- commands, which are tuned for non-linear problems.

The formula is a function of x which contains arbitrary parameters (whose names do not start with an uppercase letter).

Try for instance:

QSoas> generate-dataset 0 1 x**2+2*x+3
QSoas> linear-least-squares a*x**2+b*x+c

The results of the operation are the values of the parameters, which can be sent to the output file, to meta-data or to the accumulator, see there for more details.

Important warning: QSoas does not try to check that the dependence of the formula on the parameters is truly linear. If that is not the case, you will simply get nonsensical answers.
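
For illustration, the example above amounts to a single matrix solve, which is why no iterative fit interface is needed. A pure-Python sketch under my own naming, using the normal equations:

```python
def linear_least_squares(x, y, basis):
    """Fit y ~ sum_j p[j] * basis[j](x) by solving the normal
    equations (A^T A) p = A^T y: one matrix solve, no iteration.
    Sketch, fine for a few parameters."""
    m, n = len(basis), len(x)
    A = [[f(xi) for f in basis] for xi in x]
    ata = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(m)]
           for r in range(m)]
    aty = [sum(A[i][r] * y[i] for i in range(n)) for r in range(m)]
    for k in range(m):              # Gauss-Jordan elimination
        piv = ata[k][k]
        ata[k] = [v / piv for v in ata[k]]
        aty[k] /= piv
        for r in range(m):
            if r != k:
                f = ata[r][k]
                ata[r] = [vr - f * vk for vr, vk in zip(ata[r], ata[k])]
                aty[r] -= f * aty[k]
    return aty

# the example above: recover a, b, c from y = x**2 + 2*x + 3
xs = [i / 10 for i in range(11)]
ys = [x**2 + 2 * x + 3 for x in xs]
a, b, c = linear_least_squares(xs, ys, [lambda x: x**2, lambda x: x,
                                        lambda x: 1.0])
```

The fit recovers a = 1, b = 2, c = 3, matching the generated data.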

### contour – Contours

contour levels… /flags=flags /reversed=yes-no /set-meta=meta-data /style=style

• levels…: levels at which to contour – values: several floating-point numbers, separated by ,
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green

This command assumes that the dataset can be interpreted in the form of $y = f(x,z)$ data (see there), which means that the perpendicular coordinates have been correctly set up.

The command computes the contours for the listed values of $y$, and creates a new dataset for each contour found. There can be more than one contour for a single level.

For instance, try out:

QSoas> generate-dataset -2 2 /columns=400 /samples=400
QSoas> transpose
QSoas> apply-formula x=4*i/399.0-2
QSoas> apply-formula /mode=xyz r=(x**2+z**2)**0.5;sin(PI*r)/r
QSoas> contour 0

# Segments

It is possible to split a dataset into logical segments without changing its contents. The positions of the segment boundaries are marked by vertical lines. Segments can be used for different purposes: segment-by-segment operations, step-by-step film loss correction (using film-loss) or dataset splitting (using segments-chop).

Segments can be detected using find-steps, or set manually using set-segments or chop.

It is possible to remove the segments from a dataset by using clear-segments.

### find-steps – Find steps

find-steps /average=integer /set-segments=yes-no /threshold=number

• /average=integer: Average over that many points – values: an integer
• /set-segments=yes-no: Whether or not to set the dataset segments – values: a boolean: yes, on, true or no, off, false
• /threshold=number: Detection threshold – values: a floating-point number

This command detects “jumps” in the data (such as potential changes in a chronoamperometry experiment, for instance), and displays them both in the terminal output and on the data display.

By default, this command only shows the segments it finds, but if the option /set-segments is on, the dataset's segments are set to those found by find-steps (removing the ones previously there).

### set-segments – Set segments

set-segments (interactive)

Interactively prompts for the addition/removal of segments. A left click adds a segment where the mouse is, while a right click removes the closest segment.

### segments-chop – Chop into segments

segments-chop /expand-meta=words /flags=flags /reversed=yes-no /set-meta=meta-data /style=style

• /expand-meta=words: Expand all the given meta-data, one value per produced dataset – values: several words, separated by ‘’
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green

Cuts the dataset into several ones based on the segments defined in the current dataset. This way, the effect of a chop /set-segments=true followed by segments-chop is the same as that of the chop without /set-segments=true.

If the option /expand-meta is used, the corresponding meta-data lists are split into the individually created datasets, see here for more information.

### clear-segments – Clear segments

clear-segments

Removes all the segments from the current dataset.

### film-loss – Film loss

film-loss (interactive)

Applies stepwise film loss correction (in the spirit of the $K_m$ experiments in Fourmond et al, Anal. Chem., 2009). For that, the current dataset must be separated into segments, using set-segments for instance. QSoas then zooms in on the first segment. Right- and left-clicking around the final linear decay sets the value of the film loss rate constant for this step. Push space to switch to the next step; when you are done with all the steps, push q to obtain the corrected data.

# Operations involving several datasets

It is possible to combine several datasets into one by applying mathematical operations (subtraction, division and the like). Each of these operations involves matching a data point of one dataset to a data point of another. There are several ways to do that, chosen by the /mode option:

• /mode=xvalues, the default, matches on the values of X (ie the closest X value is picked). This mode will not allow values of X too far from either end of the dataset to be matched. Warning: this will not work properly for datasets in which the same X value occurs several times, like cyclic voltammograms.
• /mode=extend is the same as /mode=xvalues, but it allows arbitrary extension, so that in effect, the first and last values of the dataset are repeated ad infinitum. This used to be the default behaviour, but it can cause confusing mistakes sometimes.
• With /mode=strict, the X values have to match exactly. If no matching x value is found, then a NaN value is used. Values in the second dataset corresponding to X values not in the first are simply ignored.
• With /mode=indices, points are matched on a one-to-one basis, ie point 1 of dataset 1 to point 1 of dataset 2, irrespective of the X values.

In addition to that, the operations can make use of the segments defined on each dataset (see find-steps and set-segments). If segments are defined and /use-segments=true, then the operations are applied segment by segment, with the first point of each segment matching the corresponding point in the other dataset. This mode is suited to combining two datasets that are divided into logical bits (such as chronoamperograms with steps at different potentials) whose exact details (beginnings and durations of the steps) vary a little.
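
The default X-value matching can be sketched as follows (an illustrative helper, assuming the second dataset is sorted in X, and omitting the distance cutoff QSoas applies near the ends):

```python
import bisect

def match_xvalues(x1, x2, y2):
    """For each x of the first dataset, pick the y of the second
    dataset whose x is closest (/mode=xvalues-style matching).
    Sketch only; x2 must be sorted."""
    out = []
    for x in x1:
        i = bisect.bisect_left(x2, x)
        # the closest x2 is either just below or just above x
        best = min((j for j in (i - 1, i) if 0 <= j < len(x2)),
                   key=lambda j: abs(x2[j] - x))
        out.append(y2[best])
    return out
```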

### subtract – Subtract

subtract buffers… /flags=flags /mode=choice /reversed=yes-no /set-meta=meta-data /style=style /use-segments=yes-no

Other name: S

• buffers…: The datasets of the operation – values: comma-separated lists of datasets in the stack, see dataset lists
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false

Subtracts the last dataset from all the previous ones. Useful for standard baseline removal.

### div – Divide

div buffers… /flags=flags /mode=choice /reversed=yes-no /set-meta=meta-data /style=style /use-segments=yes-no

• buffers…: The datasets of the operation – values: comma-separated lists of datasets in the stack, see dataset lists
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false

Divides all datasets by the last one. Just as subtract is useful to remove one of the components of a multicomponent response when they are additive, div can be used to remove one of the components when they are multiplicative, like film loss in protein film voltammetry experiments; see Fourmond et al, Anal. Chem. 2009 for more information.

### add – Add

add buffers… /mode=choice /use-segments=yes-no

• buffers…: Datasets to add – values: comma-separated lists of datasets in the stack, see dataset lists
• /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
• /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false

Adds all the given datasets and pushes the result (a single dataset).

### multiply – Multiply

multiply buffers… /mode=choice /use-segments=yes-no

Other name: mul

• buffers…: Datasets to multiply – values: comma-separated lists of datasets in the stack, see dataset lists
• /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
• /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false

Multiplies all the given datasets and pushes the result (a single dataset).

### average – Average

average buffers… /count=yes-no /mode=choice /split=yes-no /use-segments=yes-no

• buffers…: Datasets to average – values: comma-separated lists of datasets in the stack, see dataset lists
• /count=yes-no: If on, a last column contains the number of averaged points for each value – values: a boolean: yes, on, true or no, off, false
• /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
• /split=yes-no: If on, the datasets are automatically split into monotonic parts before averaging. – values: a boolean: yes, on, true or no, off, false
• /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false

In a manner similar to subtract and div, the average command averages all the datasets given into one, with the same segment-by-segment capacities.

An additional feature of average is its ability to first split the datasets into monotonic parts before averaging (when /split is on; this is the default when only one dataset is provided). This proves useful for averaging the forward and return scans of a cyclic voltammogram.
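
The monotonic splitting step can be sketched like this (an illustrative helper; QSoas's handling of the turning points may differ):

```python
def monotonic_parts(xs):
    """Split a list of X values into index ranges over which X is
    monotonic (the /split step). Sketch: in this version adjacent
    parts share the turning point."""
    parts, start = [], 0
    for i in range(1, len(xs) - 1):
        rising = xs[i] > xs[i - 1]
        if (xs[i + 1] > xs[i]) != rising:   # direction changes at i
            parts.append((start, i + 1))
            start = i
    parts.append((start, len(xs)))
    return parts
```

A triangular scan such as [0, 1, 2, 3, 2, 1, 0] thus splits into a forward and a return part around the vertex.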

### merge – Merge datasets based on X values

merge buffers… /mode=choice /use-segments=yes-no

• buffers…: The datasets of the operation – values: comma-separated lists of datasets in the stack, see dataset lists
• /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
• /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false

Merges the second dataset with the first one, keeping Y of the second as a function of Y of the first. The algorithm for finding which point in the second corresponds to a given one in the first is the same as that of the other commands in this section (subtract, div…).

If more than two datasets are specified, the last one gets merged with each of the previous ones.

### contract – Group datasets on X values

contract buffers… /contract-meta=meta-data /mode=choice /perp-meta=text /use-columns=columns /use-segments=yes-no

• buffers…: Datasets to contract – values: comma-separated lists of datasets in the stack, see dataset lists
• /contract-meta=meta-data: Contracts all the named meta data meta-data lists – values: comma-separated list of meta-data to group into lists, see there
• /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
• /perp-meta=text: defines the perpendicular coordinate from meta-data – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /use-columns=columns: if specified, uses only the given columns for the contraction – values: a comma-separated list of columns names
• /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false

contract does the reverse of expand, ie it regroups in one dataset several values of Y that run against the same values of X. The result is a dataset that contains as many Y columns as the total of Y columns of all the arguments. X matching between the datasets is done as for the other commands in this section (div or subtract).

You can specify a column list using /use-columns (see above for more information about column lists), in which case the other columns from the datasets are ignored.

If you specify one or several names of meta using the /contract-meta option, their values will be gathered into a list of meta-data (instead of keeping the value of the first dataset). See also here.

# Data inspection facilities

## Options for data output

The commands below (and some others too, like eval) are able to compute a number of quantities from the datasets they work on, such as various statistics, the position of peaks, and so on. QSoas provides several ways to store and work with these data.

### Saving to the output file

The “traditional” way is to store the data in the output file. The data end up as TAB-separated values, with a generally explicit header, and the name of the dataset the data were extracted from in the first column. When writing to the output file, you can force the writing of extra columns containing some meta-data by listing them using the /meta= option.

### Saving as meta-data

It is also possible to use the /set-meta= option to “decorate” the datasets with the results of the command, as meta-data. For instance, running

QSoas> stats /set-meta=y_min

sets the y_min meta-data to the minimum value of the $y$ column of the dataset. It is also possible to select several meta-data, separating them with commas, or even to change their names, as in

QSoas> stats /set-meta=y_min->my_interesting_meta

which also saves the minimum of the $y$ column as meta-data, but this time under the name my_interesting_meta.

You can save all the data in one go under their original name using /set-meta=*.

### Combining /accumulate= and pop to create new datasets on the fly

It is now possible to generate a dataset from scratch using the /accumulate= option. This option takes an ordered list of output values (and, possibly, meta-data), and accumulates the values into a “hidden” dataset, until the command pop is called. For instance, running the following command on different datasets:

QSoas> 1 /output=false /accumulate=x,y,area

will populate a dataset with 3 columns, containing respectively the X position, Y position, and area of the major peak of each dataset (with possibly extra columns for meta-data).

This command is typically used to parse a whole series of datasets using run-for-each or run-for-datasets.

### pop – Pop accumulator

pop /drop=yes-no /status=yes-no

• /drop=yes-no: Drop the accumulator instead of pushing it on the stack – values: a boolean: yes, on, true or no, off, false
• /status=yes-no: Gets the status of the accumulator – values: a boolean: yes, on, true or no, off, false

A number of commands can accumulate data to a “hidden” dataset using the /accumulate= options. The pop command takes that dataset, pushes it to the stack, and clears the “hidden” dataset.

With /drop=yes, the “hidden” dataset is just cleared; it is not pushed onto the stack.

With /status=yes, this command just shows the current status of the hidden dataset.

### find-peaks – Find peaks

find-peaks /accumulate=meta-data /include-borders=yes-no /meta=meta-data /output=yes-no /peaks=integer /save-parameters=file /set-meta=meta-data /threshold=number /which=choice /window=integer

• /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
• /include-borders=yes-no: whether or not to include borders – values: a boolean: yes, on, true or no, off, false
• /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
• /output=yes-no: whether to write data to output file (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /peaks=integer: Display only that many peaks (by order of intensity) – values: an integer
• /save-parameters=file: a file in which to save the peak parameters as fit parameters – values: name of a file
• /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
• /threshold=number: threshold for the peak Y value – values: a floating-point number
• /which=choice: selects which of minima and/or maxima to find – values: one of: both, max, min
• /window=integer: width of the window – values: an integer

Finds all the peaks of the current dataset. Peaks are local extrema over a window of a number of points given by /window (8 by default). This command will find many peaks on noisy data; you can limit the output to the n most intense peaks using /peaks=n (peaks are ranked by amplitude with respect to the average of the dataset).

By default, if a point at either end of the dataset is an extremum, it is not included, unless you use /include-borders=true.
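
The local-extremum criterion can be sketched as follows (illustrative code; the exact tie-breaking in QSoas may differ, and only maxima are handled here):

```python
def find_peaks(ys, window=8):
    """Indices of points that are local maxima over a `window`-point
    neighbourhood. Sketch of the criterion only; borders are
    excluded, as in the default behaviour."""
    half, peaks = window // 2, []
    for i in range(1, len(ys) - 1):
        lo, hi = max(0, i - half), min(len(ys), i + half + 1)
        if ys[i] == max(ys[lo:hi]) and ys[i] > ys[i - 1] and ys[i] > ys[i + 1]:
            peaks.append(i)
    return peaks
```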

Peaks are indicated on the dataset using lines, and their position is written to the terminal. In addition, if /output is on (off by default), they are also written to the output file.

With the /save-parameters option, you can save the position of the peaks as a “fit parameter file”, which you can reload later, in a peak fit for instance, as a help to properly set the initial values. For this to work, you probably need to manually edit the parameters file (with any text editor) to give the parameters the names corresponding to the ones of the fit.

### echem-peaks – Find peaks pairs

echem-peaks /accumulate=meta-data /include-borders=yes-no /meta=meta-data /output=yes-no /pairs=integer /save-parameters=file /set-meta=meta-data /threshold=number /which=choice /window=integer

• /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
• /include-borders=yes-no: whether or not to include borders – values: a boolean: yes, on, true or no, off, false
• /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
• /output=yes-no: whether to write data to output file (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /pairs=integer: Display (and output) only that many peak pairs (by order of intensity) – values: an integer
• /save-parameters=file: a file in which to save the peak parameters as fit parameters – values: name of a file
• /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
• /threshold=number: threshold for the peak Y value – values: a floating-point number
• /which=choice: selects which of minima and/or maxima to find – values: one of: both, max, min
• /window=integer: width of the window – values: an integer

This function tries to find “pairs” of peaks that may be the anodic and cathodic peaks of a redox couple, and outputs useful information about those.

### 1 – Find peak

1 /accumulate=meta-data /include-borders=yes-no /meta=meta-data /output=yes-no /save-parameters=file /set-meta=meta-data /threshold=number /which=choice /window=integer

• /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
• /include-borders=yes-no: whether or not to include borders – values: a boolean: yes, on, true or no, off, false
• /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
• /output=yes-no: whether to write data to output file (defaults to true) – values: a boolean: yes, on, true or no, off, false
• /save-parameters=file: a file in which to save the peak parameters as fit parameters – values: name of a file
• /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
• /threshold=number: threshold for the peak Y value – values: a floating-point number
• /which=choice: selects which of minima and/or maxima to find – values: one of: both, max, min
• /window=integer: width of the window – values: an integer

Equivalent to

QSoas> find-peaks /peaks=1 /output=true

### 2 – Find two peaks

2 /accumulate=meta-data /include-borders=yes-no /meta=meta-data /output=yes-no /save-parameters=file /set-meta=meta-data /threshold=number /which=choice /window=integer

• /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
• /include-borders=yes-no: whether or not to include borders – values: a boolean: yes, on, true or no, off, false
• /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
• /output=yes-no: whether to write data to output file (defaults to true) – values: a boolean: yes, on, true or no, off, false
• /save-parameters=file: a file in which to save the peak parameters as fit parameters – values: name of a file
• /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
• /threshold=number: threshold for the peak Y value – values: a floating-point number
• /which=choice: selects which of minima and/or maxima to find – values: one of: both, max, min
• /window=integer: width of the window – values: an integer

Equivalent to

QSoas> find-peaks /peaks=2 /output=true

### stats – Statistics

stats (/buffers=)datasets /accumulate=meta-data /for-which=code /meta=meta-data /output=yes-no /set-meta=meta-data /stats=stats-names /use-segments=yes-no

• (/buffers=)datasets (default option): datasets to work on – values: comma-separated lists of datasets in the stack, see dataset lists
• /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
• /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
• /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
• /output=yes-no: whether to write data to output file (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
• /stats=stats-names: writes only the given stats – values: one or more name of statistics (as displayed by stats), separated by ,.
• /use-segments=yes-no: makes statistics segment by segment (defaults to false) – values: a boolean: yes, on, true or no, off, false

stats displays various statistics about the current dataset (or the ones specified by the /buffers option). The following statistics are available:

• buffer, rows, columns, segments: the buffer name, and the row, column and segment counts.
• _sum, _average, _var, _stddev: the sum, the average, the variance and the standard deviation of the values of the column.
• all_average, all_sum, yall_average, yall_sum: the average and sum of all columns or just of the y columns
• _first, _last: the first and last values of the column.
• _min, _max: the minimum and maximum values of the column.
• _norm: the norm of the column, that is $\sqrt{\sum {x_i}^2}$.
• y_int: the integral of the Y values over the X values.
• _med, _q10, _q25, _q75, _q90: the median, and the 10th, 25th, 75th and 90th percentiles.
• _delta_min, _delta_max: the min and max values of the difference between two successive values.
• y_a, y_b, y_keff: the linear regression coefficients of the Y column over X: a is the slope and b the value at 0, and keff is the effective first-order rate constant of decay to 0.

In this list, the statistics that start with _ are available for all columns (for instance x_min, y_min, y2_min, etc…), the ones that start with y_ are only available for Y columns (such as y_int, y2_int, etc…), and the other ones are global (buffer, rows, etc.).

These statistics are also available in Ruby code with the name $stats, such as $stats.x_min.
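To make the definitions above concrete, here is a plain-Ruby sketch (a hypothetical helper, not QSoas internals) computing a few of the per-column statistics: the average, the norm, the trapezoidal integral y_int, and the regression values y_a, y_b and y_keff (with keff taken as the decay rate whose first-order expansion matches the regression line):

```ruby
# Sketch only: how some of the statistics listed above can be computed
# for one X column and one Y column.
def column_stats(x, y)
  n    = y.size
  avg  = y.sum / n.to_f
  var  = y.map { |v| (v - avg)**2 }.sum / n.to_f
  norm = Math.sqrt(y.map { |v| v**2 }.sum)
  # y_int: trapezoidal integral of Y over X
  y_int = (1...n).map { |i| 0.5 * (y[i] + y[i - 1]) * (x[i] - x[i - 1]) }.sum
  # y_a, y_b: linear regression y = a*x + b; y_keff = -a/b
  xm = x.sum / n.to_f
  a  = x.zip(y).map { |xi, yi| (xi - xm) * (yi - avg) }.sum /
       x.map { |xi| (xi - xm)**2 }.sum
  b  = avg - a * xm
  { average: avg, stddev: Math.sqrt(var), norm: norm,
    y_int: y_int, y_a: a, y_b: b, y_keff: -a / b }
end
```

For instance, for `x = [0, 1, 2, 3]` and `y = [1, 3, 5, 7]` (the line $y = 2x + 1$), this returns a slope of 2, an intercept of 1 and an integral of 12.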

Statistics can be written to the output file with /output=true. If you specify /use-segments=true, the statistics are also displayed segment-by-segment (and written to the output file if /output=true). If you want some meta-data to be written to the output file together with the statistics, provide them as a comma-separated list to the /meta option, or, alternatively, use the /meta option of the output command. See more about that above.

It is possible to run stats on several datasets by using the /buffers= option (possibly combined with the /for-which option), to extract information from a large number of datasets. However, note that, in most cases, using eval can produce a much more tailored output.

### cursor – Cursor

cursor (interactive)

Other name: cu

Starts an interactive mode (which you can end by pressing q or Escape), in which you can position a cursor by left-clicking on the curve, to know its exact X and Y positions.

Using the right mouse button, it is also possible to position a reference point. After that, the command also shows the difference and the ratios in X,Y coordinates between the cursor and the reference point.

Cursor positions can be saved to the output file by pressing the space bar.

Hitting u subtracts the Y value of the current point from the Y values of the dataset and returns. Hitting v divides by the current Y value.

### reglin – Linear regression

reglin (interactive)

Other name: reg

Linear regression. Using the left and right mouse buttons, select a region whose slope is of interest. The terminal shows the $a$ and $b$ parameters (the equation is $ax + b$), and also the effective first order rate constant, ie the $k_{\mathrm{eff}}$ parameter of the equation

$y = b\,\mathrm{e}^{-k_{\mathrm{eff}}\,x}$

whose first-order expansion gives the same linear approximation, ie:

$k_{\mathrm{eff}} = -a/b$

Using the space bar it is possible to save the values displayed in the terminal to the output file.

With the key p, the linear regression is used as a baseline for analyzing the first peak next to the regression (in the direction of X values), showing the peak position, amplitude, and the half-peak position. This is useful for analyzing electrochemical data, for obtaining the half-wave potential.
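Under the convention of a first-order decay to 0 (as stated for y_keff in the stats section), the effective rate constant relates to the regression coefficients as $k_{\mathrm{eff}} = -a/b$; the following is a quick numeric check of that relation, not QSoas code:

```ruby
# Assumption: keff = -a/b. Then b*exp(-keff*x) expands to first order as
# b*(1 - keff*x) = b + a*x, i.e. exactly the regression line.
a, b = -0.5, 2.0
keff = -a / b
x    = 1e-3
line = a * x + b
expo = b * Math.exp(-keff * x)
# For small x the two values agree to second order in x.
```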

# Fits

QSoas was designed with a particular emphasis on fitting data. It allows complex fits, and in particular multi-dataset fits, when functions with shared parameters are fit to different datasets. Fits fall into two different categories:

• mono-dataset fits, ie fits that apply to one dataset, but that can be applied to several datasets at the same time with shared parameters
• multi-dataset fits, ie fits that need at least two datasets to work

Fits can be used through several commands: for every fit there is a mfit- and a sim- command, and, for mono-dataset fits, a fit- command in addition.

• The fit- command fits a single dataset, when the fit allows that. It takes no arguments.
• The mfit- command fits several datasets at the same time. It takes the numbers of the datasets it will work on.
• The sim- command takes a saved parameters file and a series of datasets, and pushes the data computed from the parameters on the stack using the X values of the datasets given as arguments (their Y values are not used). The sim commands are described below.

All fits commands share the following options:

• With the /extra-parameters option, one defines additional parameters for the fit, which can be used to define parameters by formulas
• Passing the name of a saved parameters file to the /parameters option preloads the given parameters at the beginning of the fit.
• The /set-from-meta option makes it possible to set a value of parameters from meta-data. For instance, running a fit with /set-from-meta=v=sr will set the value of the parameter v to the value of the meta-data sr (if present). Specify more of those by separating them with commas.
• The /debug option is for debugging fits or fit engines. It takes a debug level: 0 (no debug info), 1 and 2.
• Using the /engine option, one can pre-select the fit engine for fitting (exactly like choosing it in the dialog box)
• The /window-title= option makes it possible to select the title of the fit window, which can be useful if you’re running several fits at the same time on the same computer.

In addition to these commands, QSoas provides commands to combine fits together, to fit derivatives of the signals, and to define fits with distributions of parameters.

The fit engines now feature an “expert”, command-line, mode, which makes it possible to run fits automatically, to set parameters using expressions, to save “trajectories”, i.e. series of starting parameters -> ending parameters, and to explore the parameter space using various explorers. These features are accessible through the following options of the fit- and mfit- commands:

• /expert=true activates the expert mode and allows typing commands;
• /script= makes it possible to run a script file at fit startup time;
• /arg1=, /arg2= and /arg3= can be used to give arguments to the script specified by /script=.

The commands for the command-line interface are described below.

## Sim commands

The sim- commands are used for non-interactive computations linked to fits. They all take a parameters file and a series of datasets. What they do depends on the value of the /operation= option.

• With /operation=compute, the default, the command computes the $y = f(x)$ values predicted from the fit, as if one had used the mfit- command, loaded the parameters file, and used “Push to stack”.
• With /operation=reexport, the command does the same as loading the parameters and then “Export to output file with errors”.
• /operation=subfunctions is like compute, but the fit subfunctions are also computed and added as additional Y columns.

### ruby-run – Ruby load

ruby-run file (fit command)

• file: Ruby file to load – values: name of a file

Like the other ruby-run, loads and runs a Ruby code file.

### save-history – Save history

save-history file /overwrite=yes-no (fit command)

• file: Output file – values: name of a file
• /overwrite=yes-no: If true, overwrite without prompting – values: a boolean: yes, on, true or no, off, false

Like the other save-history, saves all the commands typed into the fit window to the given file.

### run – Run commands

run file… /add-to-history=yes-no /cd-to-script=yes-no /error=choice /only-if=code /silent=yes-no (fit command)

Other name: @

• file…: First is the command files, following are arguments – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /cd-to-script=yes-no: If on, automatically change the directory to that of the script – values: a boolean: yes, on, true or no, off, false
• /error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
• /only-if=code: If specified, the script is only run when the condition is true – values: a piece of Ruby code
• /silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false

Like the other run command, runs the given script. The options and arguments are interpreted the same way as the other run command.

### run-for-each – Runs a script for several arguments

run-for-each script arguments… /add-to-history=yes-no /arg2=file /arg3=file /arg4=file /arg5=file /arg6=file /error=choice /range-type=choice /silent=yes-no (fit command)

• script: The script file – values: name of a file
• arguments…: All the arguments for the script file to loop on – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /arg2=file: Second argument to the script – values: name of a file
• /arg3=file: Third argument to the script – values: name of a file
• /arg4=file: Fourth argument to the script – values: name of a file
• /arg5=file: Fifth argument to the script – values: name of a file
• /arg6=file: Sixth argument to the script – values: name of a file
• /error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
• /range-type=choice: If on, transform arguments into ranged numbers – values: one of: lin, log
• /silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false

Like the other run-for-each, runs a script for several values of its first parameter.

### verify – Verify

verify expression (fit command)

• expression: the expression to evaluate – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “

Does the same as the general verify command.

### fit – Fit

fit /iterations=integer /trace-file=file (fit command)

• /iterations=integer: the maximum number of iterations of the fitting process – values: an integer
• /trace-file=file: a file to save the details of the fitting process – values: name of a file

Runs the fit, optionally changing the number of maximum fit iterations through the /iterations option.

### linear-prefit – Linear prefit

linear-prefit /just-look=yes-no /threshold=number (fit command)

• /just-look=yes-no: if true, just find the linear parameters, do not adjust – values: a boolean: yes, on, true or no, off, false
• /threshold=number: threshold under which to consider linearity – values: a floating-point number

This command determines which parameters are linear in the current fit, and runs a linear least square minimization procedure on them. This can greatly help with convergence in some cases, or simply greatly speed it up.

With /just-look=true, this command doesn’t modify the fit parameters, but just displays in the terminal which parameters were found to be linear.
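The idea behind the linear pre-fit can be illustrated outside QSoas: parameters that enter the model linearly can be obtained in a single step by solving the normal equations, rather than by iterating. A minimal sketch (hypothetical example, using Ruby's standard matrix library):

```ruby
require 'matrix'

# Model y = p1*x + p2*x**2: both p1 and p2 are linear parameters.
xs = [1.0, 2.0, 3.0, 4.0]
ys = xs.map { |x| 3.0 * x + 0.5 * x**2 }    # synthetic data with p1=3, p2=0.5

j = Matrix[*xs.map { |x| [x, x**2] }]       # design matrix
# Normal equations: p = (J^T J)^-1 J^T y
p = ((j.t * j).inverse * j.t * Matrix.column_vector(ys)).to_a.flatten
# p recovers [3.0, 0.5] in one step, no iteration needed
```

This is why a linear pre-fit can dramatically speed up convergence: the nonlinear engine only has to deal with the remaining, genuinely nonlinear parameters.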

### commands – Commands

commands (fit command)

Like the other commands command, lists the commands available from within the fit prompt.

### system – System

system command… /shell=yes-no /timeout=integer (fit command)

• command…: Arguments of the command – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /shell=yes-no: use shell (on by default on Linux/Mac, off in windows) – values: a boolean: yes, on, true or no, off, false
• /timeout=integer: timeout (in milliseconds) – values: an integer

Like the other system command, runs an external program.

### push – Push to stack

push /flags=flags /recompute=yes-no /residuals=yes-no /reversed=yes-no /set-meta=meta-data /style=style /subfunctions=yes-no (fit command)

• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /recompute=yes-no: whether or not to recompute the fit (on by default) – values: a boolean: yes, on, true or no, off, false
• /residuals=yes-no: if true, push the residuals rather than the computed values – values: a boolean: yes, on, true or no, off, false
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
• /subfunctions=yes-no: whether the subfunctions are also exported or not – values: a boolean: yes, on, true or no, off, false

Pushes the computed function to the stack, like the fit sim- command would do.

## Parameter space exploration

QSoas now provides facilities for parameter space exploration. The idea is that QSoas will attempt several (many!) fits with different starting parameters. There are different explorers that choose new starting parameters in a different way, but all explorers can be used this way:

QSoas.fit> monte-carlo-explorer A_inf:-10..10
Selected parameter space explorer: 'monte-carlo'
Setting up monte-carlo explorator with: 20 iterations and 50 fit iterations
* A_inf[#0]: -10 to 10 lin
QSoas.fit> iterate-explorer

The first command sets up the explorer, here the monte-carlo-explorer, and the second iterates the explorer, choosing new parameters and running the fits, until the number of iterations specified by the explorer has been reached.

### monte-carlo-explorer – Monte Carlo

monte-carlo-explorer parameters… /fit-iterations=integer /gradual-datasets=integer /iterations=integer /reset-frequency=integer (fit command)

• parameters…: Parameter specification – values: several words, separated by ‘ ’
• /fit-iterations=integer: Maximum number of fit iterations – values: an integer
• /gradual-datasets=integer: Number of starting datasets when doing gradual exploration – values: an integer
• /iterations=integer: Number of monte-carlo iterations – values: an integer
• /reset-frequency=integer: If > 0 reset to the best parameters every that many iterations – values: an integer

Sets up a “Monte Carlo” exploration, i.e. an exploration in which the initial parameters are chosen uniformly within given segments.

QSoas.fit> monte-carlo-explorer A_inf:-10..10 tau_1:1e-2..1e2,log

This command sets up the exploration, with the parameter A_inf uniformly distributed between -10 and 10, and tau_1 with a log uniform distribution between 1e-2 and 1e2. The other parameters are left untouched from the previous fit iteration.

If /reset-frequency= is used to specify a number above 0, all the other parameters of the fit (the ones that are not listed on the command-line) will be reset, every that many explorer iterations, to the values they had at the end of the current best fit.
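The way starting parameters are drawn from specifications like A_inf:-10..10 and tau_1:1e-2..1e2,log can be sketched as follows (hypothetical helper, not QSoas code):

```ruby
# Draw one starting value: uniform on [lo, hi], or, with log: true,
# uniform in log space (i.e. log-uniform on [lo, hi], lo > 0).
def draw(lo, hi, log: false)
  if log
    Math.exp(Math.log(lo) + rand * (Math.log(hi) - Math.log(lo)))
  else
    lo + rand * (hi - lo)
  end
end

a_inf = draw(-10, 10)              # A_inf:-10..10
tau_1 = draw(1e-2, 1e2, log: true) # tau_1:1e-2..1e2,log
```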

### linear-explorer – Linear ramp

linear-explorer parameters… /fit-iterations=integer /iterations=integer (fit command)

• parameters…: Parameter specification – values: several words, separated by ‘ ’
• /fit-iterations=integer: Maximum number of fit iterations – values: an integer
• /iterations=integer: Number of monte-carlo iterations – values: an integer

Varies the parameter linearly (or logarithmically) over the given range:

QSoas.fit> linear-explorer A_inf:-10..10

This command runs a number of fits with the initial value of A_inf ranging from -10 to +10. You can specify several parameters this way, they will be varied simultaneously (i.e. they will be linearly correlated). Adding ,log switches to an exponential progression.

### iterate-explorer – Iterate explorer

iterate-explorer (/script=)file /arg1=file /arg2=file /improved-script=file /just-pick=yes-no /linear-prefit=yes-no /pre-script=file (fit command)

• (/script=)file (default option): script file run after the iteration – values: name of a file
• /arg1=file: First argument to the scripts – values: name of a file
• /arg2=file: Second argument to the scripts – values: name of a file
• /improved-script=file: script file run whenever the best residuals have improved – values: name of a file
• /just-pick=yes-no: If true, then just picks the next initial parameters, don’t fit, don’t iterate – values: a boolean: yes, on, true or no, off, false
• /linear-prefit=yes-no: If true, runs a linear pre-fit before running the real fit – values: a boolean: yes, on, true or no, off, false
• /pre-script=file: script file run after choosing the parameters and before running the fit – values: name of a file

Runs all the iterations of the previously set up explorer. If /just-pick=true is specified, just picks the parameters once, without running the iterations or any fit.

The /pre-script, /script and /improved-script options specify the names of script files that will be run either after picking the parameters but before running the fit, after the fit, or every time the best residuals are improved. They can be given additional arguments through the /arg1 and /arg2 options.

# Computation/simulations functions

The commands in this section generate data “from scratch”, though most require a dataset as a starting point to provide X values. You can create a dataset for those commands using generate-dataset.

## Evaluation functions

QSoas provides various functions to evaluate the result of mathematical operations.

### eval – Ruby eval

eval codes… (/buffers=)datasets /accumulate=meta-data /for-which=code /meta=meta-data /modify-meta=yes-no /output=yes-no /set-meta=meta-data /use-dataset=yes-no

Other name: eval-cmd

• codes…: Any ruby code – values: several pieces of Ruby code
• (/buffers=)datasets (default option): Datasets to run eval on – values: comma-separated lists of datasets in the stack, see dataset lists
• /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
• /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
• /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
• /modify-meta=yes-no: Reads back the modifications made to the $meta hash (implies /use-dataset=true) – values: a boolean: yes, on, true or no, off, false
• /output=yes-no: whether to write data to output file (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
• /use-dataset=yes-no: If on (the default) and if there is a current dataset, the $meta and $stats hashes are available – values: a boolean: yes, on, true or no, off, false

Evaluates the given code as a Ruby expression:

QSoas> eval 2*3
=> 6

It runs in the same environment as apply-formula and the custom fits (except, of course, that there are no x and y variables). It can be useful to check that a function has been correctly defined in a file loaded by ruby-run.

Moreover, if /use-dataset is true (the default), it can also access the meta-data and statistics of the dataset (as apply-formula with /use-meta=true and /use-stats=true):

QSoas> generate-dataset 0 10 x**3
QSoas> eval $stats.y_int
=> 2500.002505007509

You can also use this command as a calculator.

Starting from version 3.1, eval can be used much more effectively for data extraction from a number of datasets. It can work on several datasets in a row using the classical /buffers and /for-which options, and can use several formulas. For instance:

QSoas> eval $stats.x_max$stats.y_int /buffers=flagged:my-data /output=true

will write to the output file the max x value and the corresponding integration over Y of all the datasets flagged my-data. To ease the parsing afterwards, the values can be given a name, which will be used as a column name for the output file (and the accumulator if you chose this):

QSoas> eval xmax:$stats.x_max my_int:$stats.y_int /buffers=flagged:my-data /output=true

This is now the recommended way to extract all kinds of information from datasets.

#### /modify-meta=true

With the option /modify-meta=true, it is possible to modify the meta-data of the dataset by changing the values of the $meta dictionary. It is also possible to add new values. For instance, the following command:

QSoas> eval /modify-meta=true $meta.yyy=3

is equivalent to using set-meta this way:

QSoas> set-meta yyy 3

This option also makes it possible to modify the row and column names by modifying the $row_names and $col_names variables.

### find-root – Finds a root

find-root formula seed (/max=)number

• formula: An expression of 1 variable (not an equation !) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• seed: Initial X value from which to search – values: a floating-point number
• (/max=)number (default option): If present, uses dichotomy between seed and max – values: a floating-point number

Finds the root of the given x-dependent expression with an iterative algorithm, starting from seed as the initial value. If the /max option is specified, the search proceeds by dichotomy between the two values (seed and max).

QSoas> find-root 'x**2 - 3' 1
Found root at: 1.73205

Do not use an equal sign. The returned value is the one for which the expression equates 0.
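The dichotomy search used with /max can be sketched in Ruby (an illustrative bisection, not QSoas's actual implementation), here for the example x**2 - 3 between a seed of 1 and a max of 2:

```ruby
# Bisection: repeatedly halve the interval, keeping the half in which
# the expression changes sign.
def bisect(lo, hi, tol = 1e-12)
  f_lo = yield(lo)
  while (hi - lo).abs > tol
    mid   = 0.5 * (lo + hi)
    f_mid = yield(mid)
    if (f_lo < 0) == (f_mid < 0)
      lo, f_lo = mid, f_mid   # sign change is in [mid, hi]
    else
      hi = mid                # sign change is in [lo, mid]
    end
  end
  0.5 * (lo + hi)
end

bisect(1.0, 2.0) { |x| x**2 - 3 }   # converges to sqrt(3) ~ 1.73205
```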

### integrate-formula – Integrate expression

integrate-formula formula a b /integrator=choice /prec-absolute=number /prec-relative=number /subdivisions=integer

• formula: An expression of 1 variable (not an equation !) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• a: Left bound of the segment – values: a floating-point number
• b: Right bound of the segment – values: a floating-point number
• /integrator=choice: The algorithm used for integration – values: one of: gauss15, gauss21, gauss31, gauss41, gauss51, gauss61, qng
• /prec-absolute=number: Absolute precision required for integration – values: a floating-point number
• /prec-relative=number: Relative precision required for integration – values: a floating-point number
• /subdivisions=integer: Maximum number of subdivisions in the integration algorithm – values: an integer

Computes the integral of the given expression of x between bounds a and b:

QSoas> integrate-formula x**2 10 22
Integral value: 3216	estimated error: 3.57048e-11	 in 31 evaluations over 1 intervals 

The available integrators are gaussi (with i ranging from 15 to 61), which correspond to adaptive Gauss-Kronrod integrators (starting with i evaluations), and qng, which is a non-adaptive Gauss-Kronrod integrator. See the documentation of the GNU Scientific Library for more information.
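As an illustration of what such a quadrature computes (a simple fixed-step sketch, not the adaptive Gauss-Kronrod algorithms named above), a composite Simpson rule reproduces the example's result, since $\int_{10}^{22} x^2\,\mathrm{d}x = 3216$:

```ruby
# Composite Simpson's rule over n (even) sub-intervals.
def simpson(a, b, n = 100)
  h = (b - a) / n.to_f
  s = yield(a) + yield(b)
  (1...n).each { |i| s += yield(a + i * h) * (i.odd? ? 4 : 2) }
  s * h / 3.0
end

simpson(10, 22) { |x| x**2 }   # ~ 3216 (Simpson is exact for cubics and below)
```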

### mintegrate-formula – Integrate expression

mintegrate-formula formula a b /integrator=choice /max-evaluations=integer /prec-absolute=number /prec-relative=number

• formula: An expression of x and z – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• a: Lower Z value – values: a floating-point number
• b: Upper Z value – values: a floating-point number
• /integrator=choice: The algorithm used for integration – values: one of: akima, csplines, gk15, gk21, gk31, gk41, gk51, gk61, naive
• /max-evaluations=integer: Maximum number of function evaluations – values: an integer
• /prec-absolute=number: Absolute precision required for integration – values: a floating-point number
• /prec-relative=number: Relative precision required for integration – values: a floating-point number

This command takes a function of $x$ and $z$, two numbers, $a$ and $b$, and computes, for each value of $x$ of the current dataset, the integral:

$\int_a^b f(x, z)\,\mathrm{d}z$

This command uses the same algorithms for integration as the fits created by define-distribution-fit.
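The per-x integration over z can be sketched as follows (a simple midpoint rule for illustration; QSoas relies on the integrators listed above):

```ruby
# For each x in xs, integrate f(x, z) over z in [a, b] with a
# fixed-step midpoint rule.
def mintegrate(xs, a, b, n = 1000)
  h = (b - a) / n.to_f
  xs.map do |x|
    (0...n).map { |i| yield(x, a + (i + 0.5) * h) }.sum * h
  end
end

mintegrate([1.0, 2.0], 0, 1) { |x, z| x * z }  # ~ [0.5, 1.0], since int of x*z dz = x/2
```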

### generate-dataset – Generate dataset

generate-dataset start end (/formula=)words /columns=integer /flags=flags /log=yes-no /name=text /number=integer /reversed=yes-no /samples=integer /set-meta=meta-data /style=style

Other name: generate-buffer

• start: The first X value – values: a floating-point number
• end: The last X value – values: a floating-point number
• (/formula=)words (default option): Formula to generate the Y values – values: several words, separated by ‘ ’
• /columns=integer: number of columns of the generated datasets – values: an integer
• /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
• /log=yes-no: uses logarithmically spaced X values – values: a boolean: yes, on, true or no, off, false
• /name=text: The name of the newly generated buffers (may include a %d specification for the number) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /number=integer: generates that many datasets – values: an integer
• /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
• /samples=integer: number of data points – values: an integer
• /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
• /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green

Generates a dataset with samples samples (by default 1000) uniformly spaced between start and end.

If formula is provided, it sets the Y values according to this formula (else Y is taken equal to X).

QSoas> generate-dataset -10 10 sin(x)

## Simulation functions

### kinetic-system – Kinetic system evolver

kinetic-system reaction-file parameters /adaptive=yes-no /annotate=yes-no /dump=yes-no /min-step-size=number /prec-absolute=number /prec-relative=number /step-size=number /stepper=stepper /sub-steps=integer

• reaction-file: File describing the kinetic system – values: name of a file
• parameters: Parameters of the model – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /adaptive=yes-no: whether or not to use an adaptive stepper (on by default) – values: a boolean: yes, on, true or no, off, false
• /annotate=yes-no: If on, a last column will contain the number of function evaluation for each step (default false) – values: a boolean: yes, on, true or no, off, false
• /dump=yes-no: if on, prints a description of the system rather than solving (default: false) – values: a boolean: yes, on, true or no, off, false
• /min-step-size=number: minimum step size for the stepper – values: a floating-point number
• /prec-absolute=number: absolute precision required – values: a floating-point number
• /prec-relative=number: relative precision required – values: a floating-point number
• /step-size=number: initial step size for the stepper – values: a floating-point number
• /stepper=stepper: algorithm used for integration (default: rkf45) – values: ODE stepper algorithm, one of: bsimp, msadams, msbdf, rk1imp, rk2, rk2imp, rk4, rk4imp, rk8pd, rkck, rkf45
• /sub-steps=integer: If this is not 0, then the smallest step size is that many times smaller than the minimum delta t – values: an integer

Simulates the evolution over time of the kinetic system given in the reaction-file (see the section kinetic system for the syntax of the reaction files).

This command uses the current dataset as a source for X values.

The result is a multi-column dataset containing the concentration of all the species in the different columns.

parameters is a list of assignments evaluated at the beginning of the time evolution to set the parameters of the system (all parameters not set this way default to 0). This list is evaluated as Ruby code, so you should separate the assignments with ;.

For instance, if the reaction file (system.sys) contains:

A <=>[ki][ka] I

You can run the following commands to simulate the time evolution of the system with initial concentration of A equal to 1 (the parameter c0_A), of I equal to 0 (the parameter c0_I, here not specified so assumed to be 0) and with ki and ka equal to 1:

QSoas> generate-dataset 0 10
QSoas> kinetic-system system.sys 'c0_A = 1;ka = 1; ki = 1'
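The evolution simulated above can be sketched by hand (assuming the convention that ki is the A -> I rate and ka the I -> A rate; a naive explicit Euler scheme, not the adaptive steppers QSoas uses):

```ruby
# A <=> I with forward rate ki and backward rate ka.
# With ki = ka = 1 and c0_A = 1, the analytic solution is
# A(t) = 0.5 + 0.5*exp(-2t), which decays to ka/(ki + ka) = 0.5.
ki, ka = 1.0, 1.0
a, i   = 1.0, 0.0          # c0_A = 1, c0_I = 0
dt     = 1e-4
(0...(10.0 / dt).round).each do
  flux = ki * a - ka * i   # net A -> I flux
  a -= flux * dt
  i += flux * dt
end
# At t = 10, a is very close to the equilibrium value 0.5,
# and a + i stays equal to the total concentration 1.
```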

### ode – ODE solver

ode file (/parameters=)text /adaptive=yes-no /annotate=yes-no /dump=yes-no /min-step-size=number /prec-absolute=number /prec-relative=number /step-size=number /stepper=stepper /sub-steps=integer

• file: File containing the system – values: name of a file
• (/parameters=)text (default option): Values of the parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• /adaptive=yes-no: whether or not to use an adaptive stepper (on by default) – values: a boolean: yes, on, true or no, off, false
• /annotate=yes-no: If on, a last column will contain the number of function evaluation for each step – values: a boolean: yes, on, true or no, off, false
• /dump=yes-no: If on, does not integrate, just dumps the parsed contents of the ODE file – values: a boolean: yes, on, true or no, off, false
• /min-step-size=number: minimum step size for the stepper – values: a floating-point number
• /prec-absolute=number: absolute precision required – values: a floating-point number
• /prec-relative=number: relative precision required – values: a floating-point number
• /step-size=number: initial step size for the stepper – values: a floating-point number
• /stepper=stepper: algorithm used for integration (default: rkf45) – values: ODE stepper algorithm, one of: bsimp, msadams, msbdf, rk1imp, rk2, rk2imp, rk4, rk4imp, rk8pd, rkck, rkf45
• /sub-steps=integer: If this is not 0, then the smallest step size is that many times smaller than the minimum delta t – values: an integer

ode solves ordinary differential equations. The equation definition file is structured in three parts, separated by at least one fully blank line, the last one being optional.

The first section defines the “initial conditions”; there are as many integrated variables as there are lines in this section. This section is only evaluated once at the beginning of the integration.

The second section defines the derivatives; they are evaluated several times for each time step.

The third is optional and is described further below.

Here is the contents of the file (say sine.ode) one would use to obtain $\sin t$ and $\cos t$ as solutions.

sin = 0
cos = 1

d_sin = cos
d_cos = -sin

Important: Make sure that at least one fully blank line separates the definition of the initial values and the definition of the derivatives. Make sure also that each variable defined in the first section has a corresponding derivative in the second, starting with d_.

After running the commands:

QSoas> generate-dataset 0 10
QSoas> ode sine.ode

One has a dataset with one X column (representing the $t$ values), and two Y columns, $\sin t$ and $\cos t$ (in the order in which they are given in the “initial conditions” section).

The optional third section can be used to control the exact output of the program. The above example can be completed thus:

sin = 0
cos = 1

d_sin = cos
d_cos = -sin

[sin, cos, sin**2 + cos**2]

Using this gives 3 Y columns: $\sin t$, $\cos t$ and $\sin^2 t + \cos^2 t$. The latter should hopefully be very close to 1.
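What the solver does with the sine.ode example can be sketched with a classical fixed-step RK4 integrator (for illustration only; QSoas uses the GSL steppers listed below):

```ruby
# Integrate d_sin = cos, d_cos = -sin from t = 0 with a
# 4th-order Runge-Kutta step of size h.
def rk4_step(s, c, h)
  f  = ->(s, c) { [c, -s] }                 # the derivatives
  k1 = f.(s, c)
  k2 = f.(s + h / 2 * k1[0], c + h / 2 * k1[1])
  k3 = f.(s + h / 2 * k2[0], c + h / 2 * k2[1])
  k4 = f.(s + h * k3[0], c + h * k3[1])
  [s + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
   c + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])]
end

s, c = 0.0, 1.0                   # initial conditions from the first section
1000.times { s, c = rk4_step(s, c, 1e-3) }
# After 1000 steps of 1e-3, s ~ sin(1), c ~ cos(1),
# and s**2 + c**2 indeed stays very close to 1.
```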

Details of the integration procedure can be tweaked using the parameters:

• /stepper: the ODE stepper algorithm. You can find more about them in the GSL documentation. rkf45 is the standard Runge-Kutta-Fehlberg integrator, and is the default choice. If QSoas complains that it has difficulties integrating and suggests trying implicit solvers (because your system is too stiff), then try rk4imp, bsimp, msadams or msbdf.
• /prec-relative and /prec-absolute control the precision. A step will be deemed precise enough if the error estimate is smaller than either the relative precision or the absolute precision.
• /adaptive controls whether an adaptive step size is used (the values of $t$ in the resulting dataset are always those asked, but there may be more intermediate steps). You should seldom need to turn it off.

If /annotate is on, a last column is added that contains the number of evaluations of the derivatives for each step (useful for understanding why an integration takes so long, for instance).

The system of equations may contain undefined variables; one could have for instance used:

d_sin = omega * cos
d_cos = -omega * sin

Their values are set to 0 by default. You can change their values using the /parameters option:

QSoas> ode sine.ode /parameters="omega = 3"
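To make the integration concrete, here is a plain-Ruby sketch of what solving this system amounts to, using a fixed-step classical Runge-Kutta 4 scheme. QSoas itself uses the GSL steppers (adaptive rkf45 by default), so this is only an illustration; all the names here are hypothetical, not part of QSoas.

```ruby
# Fixed-step RK4 integration of the sine.ode system with a parameter:
#   d_sin = omega * cos,  d_cos = -omega * sin
# with initial conditions sin = 0, cos = 1 (solutions sin(omega*t), cos(omega*t)).

def derivatives(state, omega)
  s, c = state
  [omega * c, -omega * s]
end

def rk4_step(state, dt, omega)
  k1 = derivatives(state, omega)
  k2 = derivatives(state.each_with_index.map { |v, i| v + 0.5 * dt * k1[i] }, omega)
  k3 = derivatives(state.each_with_index.map { |v, i| v + 0.5 * dt * k2[i] }, omega)
  k4 = derivatives(state.each_with_index.map { |v, i| v + dt * k3[i] }, omega)
  state.each_with_index.map do |v, i|
    v + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
  end
end

def integrate(t_end, dt, omega)
  state = [0.0, 1.0]                  # initial values: sin = 0, cos = 1
  (t_end / dt).round.times { state = rk4_step(state, dt, omega) }
  state
end

s, c = integrate(1.0, 1e-3, 3.0)
# s is close to sin(3.0) and c to cos(3.0); s**2 + c**2 stays close to 1,
# like the third column of the extended sine.ode example above.
```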

# Scripting facilities

QSoas provides facilities for scripting, i.e. running commands unattended, for instance to prepare series of data files for fitting or further use. The following commands are useful only in this context.

## Scripting commands

### run – Run commands

run file… /add-to-history=yes-no /cd-to-script=yes-no /error=choice /only-if=code /silent=yes-no

Other name: @

• file…: the first is the command file, the following are arguments – values: one or more files. Can include wildcards such as *, [0-4], etc…
• /add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /cd-to-script=yes-no: If on, automatically change the directory to that of the script – values: a boolean: yes, on, true or no, off, false
• /error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
• /only-if=code: If specified, the script is only run when the condition is true – values: a piece of Ruby code
• /silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false

Run commands saved in a file. If a compulsory argument is missing, QSoas will prompt the user.

Arguments following the name of the script are passed to the script as “special variables” ${1}, ${2}, etc.

Imagine you often apply the same processing to a given type of data files, say, simply filtering them. You just have to write a script process.cmd containing:

load ${1}
auto-filter-fft

And run it this way:

QSoas> run process.cmd data_file.dat

or

QSoas> @ process.cmd data_file.dat

If you use run regularly, you may be interested in the other scripting commands, such as run-for-each, run-for-datasets and startup-files.

If the /only-if=condition option is specified, the script will only be executed if the condition is true. The condition behaves in the same way as that of the verify command.

#### Advanced use of script parameters

If you want to manipulate the arguments or provide default values for some of them, you can use the following syntax:

• ${2%%suffix} will be replaced by parameter 2 with the suffix “suffix” removed, or simply parameter 2 if it does not end with “suffix”.
• ${2##prefix} will be replaced by parameter 2 with the prefix “prefix” removed, or simply parameter 2 if it does not start with “prefix”.
• ${2:-value}: this will be replaced by parameter 2 if it has been provided to the script, or by “value” if it has not been provided.
• ${2:+value}: this will be replaced by “value” if parameter 2 has been provided to the script, or by nothing if that is not the case (the value of parameter 2 is not used).
• ${2?yes:no}: this will be replaced by “yes” if parameter 2 has been provided to the script, or by “no” if that is not the case.
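The expansion rules above can be mimicked in plain Ruby, which may help clarify them. The function name and structure below are illustrative only, not part of QSoas; a missing parameter is represented by nil.

```ruby
# Plain-Ruby sketch of the script-parameter expansion rules listed above.
# "param" is the parameter's value (nil when it was not provided);
# "spec" is the part after the parameter number, e.g. "%%suffix".

def expand(param, spec)
  case spec
  when /\A%%(.+)\z/                 # ${2%%suffix}: strip a suffix if present
    suffix = Regexp.last_match(1)
    param && param.end_with?(suffix) ? param[0...-suffix.length] : param
  when /\A##(.+)\z/                 # ${2##prefix}: strip a prefix if present
    prefix = Regexp.last_match(1)
    param && param.start_with?(prefix) ? param[prefix.length..-1] : param
  when /\A:-(.*)\z/                 # ${2:-value}: default value when missing
    param.nil? ? Regexp.last_match(1) : param
  when /\A:\+(.*)\z/                # ${2:+value}: value only when provided
    param.nil? ? "" : Regexp.last_match(1)
  when /\A\?(.*):(.*)\z/            # ${2?yes:no}: one of two values
    param.nil? ? Regexp.last_match(2) : Regexp.last_match(1)
  end
end

expand("data.dat", "%%.dat")    # => "data"
expand(nil, ":-default.dat")    # => "default.dat"
expand("x", "?yes:no")          # => "yes"
```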

#### Error handling

It is possible to change how the script handles errors using the /error option, which can take the following values:

• abort (the default behaviour): when a command in the script fails, the script stops executing, and the control comes back to either the command-line or the calling script. In the latter case, this behaviour is not considered as an error (i.e. the calling script does not abort);
• ignore: if a command in the script fails, the script keeps on running;
• except: as in abort, but this is considered as an error, so this may also stop the calling script;
• delete: as in abort, but all the datasets generated during the execution of this script are removed from the stack.

### let – Define a named parameter

let name value

• name: the name of the parameter – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
• value: the value of the parameter – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “

let makes it possible to define “named parameters” that can be reused inside scripts. For instance:

QSoas> let max 100
QSoas> generate-dataset 0 ${max}

They can also be used in more elaborate ways, like the normal script parameters, see there.

Warning parameter expansion only works inside scripts. Typing the above commands directly at the command prompt will yield an error.

### startup-files – Startup files

startup-files (/add=)file /rm=integer /run=yes-no

• (/add=)file (default option): adds the given startup file – values: name of a file
• /rm=integer: removes the numbered file – values: an integer
• /run=yes-no: if on, runs all the startup files right now (off by default) – values: a boolean: yes, on, true or no, off, false

This command instructs QSoas to execute command files at startup. Without options, it displays the list of command files that QSoas will read at the next startup.

Files given to the /add option are added at the end of the list. To remove a file from the list, obtain its number by running startup-files without any option, then use startup-files again with the option /rm=.

You can re-run all startup files by running:

QSoas> startup-files /run=true

### run-for-each – Runs a script for several arguments

run-for-each script arguments… /add-to-history=yes-no /arg2=file /arg3=file /arg4=file /arg5=file /arg6=file /error=choice /range-type=choice /silent=yes-no

• script: The script file – values: name of a file
• arguments…: All the arguments for the script file to loop on – values: one or more files.
Can include wildcards such as *, [0-4], etc…
• /add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /arg2=file: Second argument to the script – values: name of a file
• /arg3=file: Third argument to the script – values: name of a file
• /arg4=file: Fourth argument to the script – values: name of a file
• /arg5=file: Fifth argument to the script – values: name of a file
• /arg6=file: Sixth argument to the script – values: name of a file
• /error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
• /range-type=choice: If on, transform arguments into ranged numbers – values: one of: lin, log
• /silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false

Runs the given script file successively for each argument given. For instance, running:

QSoas> run-for-each process-my-file.cmds file1 file2 file3

is equivalent to running successively:

QSoas> @ process-my-file.cmds file1
QSoas> @ process-my-file.cmds file2
QSoas> @ process-my-file.cmds file3

The arguments need not be file names, although automatic completion will only propose file names.

If the script you want to run requires more than one argument, you can specify the others (for all the runs) using the options /arg2, /arg3 and so on:

QSoas> run-for-each process-my-file.cmds /arg2=other file1 file2

is equivalent to running:

QSoas> @ process-my-file.cmds file1 other
QSoas> @ process-my-file.cmds file2 other

If you specify either /range-type=lin or /range-type=log, the parameters are interpreted differently: they are expected to be of the form 1..10:20, which means 20 numbers between 1 and 10 (inclusive), spaced either linearly or logarithmically, depending on the value of the option.

The /error= option controls how the script handles errors. See run for more information.
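The ranged-number syntax used with /range-type amounts to generating linearly or logarithmically spaced values. A plain-Ruby sketch (the function name is illustrative, not part of QSoas):

```ruby
# Sketch of expanding a ranged number like "1..10:20": count values between
# from and to (inclusive), spaced linearly (:lin) or logarithmically (:log).

def spaced(from, to, count, type)
  return [from] if count == 1
  case type
  when :lin
    step = (to - from) / (count - 1.0)
    (0...count).map { |i| from + i * step }
  when :log
    ratio = (to / from)**(1.0 / (count - 1))
    (0...count).map { |i| from * ratio**i }
  end
end

spaced(1.0, 10.0, 4, :lin)   # => [1.0, 4.0, 7.0, 10.0]
spaced(1.0, 100.0, 3, :log)  # three points, one per decade
```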
### run-for-datasets – Runs a script for several datasets

run-for-datasets script datasets… /add-to-history=yes-no /arg1=file /arg2=file /arg3=file /arg4=file /arg5=file /arg6=file /error=choice /silent=yes-no

• script: The script file – values: name of a file
• datasets…: All the arguments for the script file to loop on – values: comma-separated lists of datasets in the stack, see dataset lists
• /add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
• /arg1=file: First argument to the script – values: name of a file
• /arg2=file: Second argument to the script – values: name of a file
• /arg3=file: Third argument to the script – values: name of a file
• /arg4=file: Fourth argument to the script – values: name of a file
• /arg5=file: Fifth argument to the script – values: name of a file
• /arg6=file: Sixth argument to the script – values: name of a file
• /error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
• /silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false

Runs the given script file for each of the datasets given. Before each invocation of the script, the dataset is pushed back to the top of the stack, as if by fetch.

The /error= option controls how the script handles errors. See run for more information.

### noop – No op

noop (/*=)words

• (/*=)words (default option): Ignored options – values: several words, separated by ‘’

Does nothing (no operation). This command can be combined with the advanced argument uses described in run to conditionally execute some commands.

### pause – Pause

pause (/message=)text /time=number

• (/message=)text (default option): the message to display – values: arbitrary text.
If you need spaces, do not forget to quote them with ‘ or “
• /time=number: time to pause for, in seconds – values: a floating-point number

This command temporarily stops the execution of a script, either displaying the given message or waiting for a certain time (if the /time= option is used).

## Non-interactive commands

In addition to purely scripting commands, many commands do not require user interaction, provided all their arguments are given. They are listed here:

# Mathematical formulas using Ruby

QSoas internally uses Ruby (or more precisely its embedded version, mruby) for the interpretation of all formulas. This means in particular that all formulas must be valid Ruby code. The Ruby syntax resembles that of other symbolic evaluation programs (it is quite close to that of gnuplot), with the following restrictions:

• Parameter names cannot start with an uppercase letter, as those have a special meaning to the Ruby interpreter: anything that starts with an uppercase letter is assumed to be a constant.
• Don’t abbreviate floating point numbers: 2. and .4 are invalid, use 2.0 and 0.4 instead.
• Case matters: Pi is $\pi$, while pi is not defined.
• Exponentiation is done with the ** operator. The ^ operator is used for binary XOR.
• Logical OR is done with the || operator and logical AND with the && operator. The single-letter versions, | and &, are binary operators and will not work as you intend.

For instance:

QSoas> eval 2+2
=> 4
QSoas> eval 2**8
=> 256
QSoas> eval sin(0.5*PI)
=> 1
QSoas> eval sin(0.25*PI)
=> 0.70710678118655

The last examples take advantage of the definition of the constant PI.

## Define global variables

Using Ruby, it is possible to define local and global variables. Local variables have to start with a lowercase letter and can be defined simply with an = sign. For instance:

QSoas> eval x=2;x**8
=> 256

In this example, a variable called x is defined to be equal to 2; its 8th power is computed afterwards.
The ; separates two instructions. The value of x is lost as soon as the command is finished:

QSoas> eval x
Error: A ruby exception occurred: (eval):1: undefined method 'x' (NoMethodError)

To create persistent storage, you can use a global variable, which looks like a local one except that its name must be preceded by a $ sign:

QSoas> eval $x=2
=> 2
QSoas> eval $x**8
=> 256

Unlike a local variable, $x keeps its value after the command finishes:

QSoas> eval $x
=> 2
QSoas> eval $x>3
=> false

The normal comparisons are available: <, >, <=, >=. To test for equality, use ==. If you need to chain several tests, use the following operators:

• logical or: ||, which will be true if either condition is true;
• logical and: &&, which will be true only if both conditions are true.

For instance:

QSoas> eval ($x>5)||($x<3)
=> true
QSoas> eval ($x>5)&&($x<3)
=> false

## Using ruby code

Ruby code can be used in several contexts:

• in eval, one can make “general computations”, which can refer to “global properties” of the current dataset, like its meta data or statistics (or of other datasets too);
• in the /for-which options that take a boolean expression to select datasets from a list, using their “global properties”;
• in apply-formula, the formula is applied to each row of a dataset, possibly modifying the values;
• in strip-if, the formula is also applied to each row of a dataset, but this time to evaluate whether a row is kept (false) or removed (true);
• in the arb fits, to specify the function (of the variable x) to fit;
• in many other places too.

## Special variables

Most ruby expressions can make use of dataset information, such as meta-data or statistics (see the documentation of the specific command for more information about how to make this available):

• the special variable $stats allows access to the statistics, as given by stats.
• the special variable $meta gives access to the meta-data.

For instance, to subtract the average from the y column:

QSoas> apply-formula y-=$stats.y_average
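What this command does can be sketched in plain Ruby: a dataset-wide statistic is computed first, then the formula is applied row by row. The variable names below are illustrative, not QSoas API.

```ruby
# Plain-Ruby sketch of "apply-formula y-=$stats.y_average": compute the
# average of the y column, then apply "y -= average" to every row.

y = [1.0, 2.0, 3.0, 4.0]
y_average = y.sum / y.length       # what $stats.y_average would hold (2.5)
y = y.map { |v| v - y_average }    # the row-by-row "y -= ..." part

y      # => [-1.5, -0.5, 0.5, 1.5]
y.sum  # => 0.0  (the column is now centered)
```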

To show the name of the original file of the current dataset:

QSoas> eval $meta.original_file

Automatic completion is available for the $stats and $meta fields.

## Complex numbers

QSoas now includes limited support for complex numbers. While the contents of the datasets can only be series of real numbers, all the Ruby formulas can define and use complex numbers. You can create complex numbers using the Cplx(real, imag) function, or just using I:

QSoas> eval Cplx(1,2)
=> (1+2*I)
QSoas> eval (I+2)**2
=> (3+4*I)

The trigonometric functions accept complex numbers as arguments:

QSoas> eval exp(2+PI*I/4)
=> (5.22485+5.22485*I)

To convert back to real values, you can use the .real or .imag methods to get the real or imaginary parts, the abs function which returns the modulus of the complex number, or the arg function which returns the argument (in radians).
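These accessors behave like the ones on standard Ruby's built-in Complex class, which can be used to experiment outside QSoas (plain Ruby writes Complex(...) where QSoas formulas write Cplx(...)):

```ruby
# Extracting real/imaginary parts, modulus and argument from a complex
# number, using plain Ruby's Complex in place of the QSoas Cplx function.

z = Complex(3, 4)
z.real   # => 3
z.imag   # => 4
z.abs    # => 5.0  (the modulus)
z.arg    # the argument in radians: atan2(4, 3), about 0.9273
```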

For instance, you can generate a spiral using:

QSoas> generate-dataset -20 20
QSoas> apply-formula z=exp((0.1+PI*I)*x);x=z.real;y=z.imag
Applying formula 'z=exp((0.1+PI*I)*x);x=z.real;y=z.imag' to buffer generated.dat
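The same spiral formula can be tried in plain Ruby. Note that plain Ruby's Math.exp does not accept complex arguments, so a small cexp helper is defined here (in QSoas formulas, exp itself handles complex numbers); the helper and function names are illustrative only.

```ruby
# Plain-Ruby sketch of the spiral formula z = exp((0.1 + PI*I)*x).

def cexp(z)
  # exp(a + b*I) = exp(a) * (cos(b) + I*sin(b))
  Math.exp(z.real) * Complex(Math.cos(z.imag), Math.sin(z.imag))
end

def spiral_point(x)
  z = cexp((0.1 + Math::PI.i) * x)
  [z.real, z.imag]                 # the new (x, y) of the spiral
end

spiral_point(0.0)  # => [1.0, 0.0]
# At integer x the point lies on the real axis, with radius exp(0.1*x).
```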

## Special functions

In addition to standard mathematical functions from the Math module (that contains, among others, the error function erf), the following special functions are available:

• abs(x): $\left|x\right|$, works on complex numbers too
• airy_ai(x): Airy Ai function $AiryAi(x)$. Precision to about $10^{-7}$. Other variants available: airy_ai_fast is faster, (precision $5\times10^{-4}$) and airy_ai_double slower, (precision $2\times10^{-16}$). (more information there)
• airy_ai_deriv(x): First derivative of Airy Ai function $\mathrm{d}AiryAi(x)/\mathrm{d}x$. Precision to about $10^{-7}$. Other variants available: airy_ai_deriv_fast is faster, (precision $5\times10^{-4}$) and airy_ai_deriv_double slower, (precision $2\times10^{-16}$). (more information there)
• airy_bi(x): Airy Bi function $AiryBi(x)$. Precision to about $10^{-7}$. Other variants available: airy_bi_fast is faster, (precision $5\times10^{-4}$) and airy_bi_double slower, (precision $2\times10^{-16}$). (more information there)
• airy_bi_deriv(x): First derivative of Airy Bi function $\mathrm{d}AiryBi(x)/\mathrm{d}x$. Precision to about $10^{-7}$. Other variants available: airy_bi_deriv_fast is faster, (precision $5\times10^{-4}$) and airy_bi_deriv_double slower, (precision $2\times10^{-16}$). (more information there)
• arg(x): $\arg x$, the argument of the complex number
• atanc(x): $\frac{\tan^{-1} x}{x}$
• atanhc(x): $\frac{\tanh^{-1} x}{x}$
• bessel_j0(x): Regular cylindrical Bessel function of 0th order, $J_0(x)$ (more information there)
• bessel_j1(x): Regular cylindrical Bessel function of first order, $J_1(x)$ (more information there)
• bessel_jn(x,n): Regular cylindrical Bessel function of n-th order, $J_n(x)$ (more information there)
• bessel_y0(x): Irregular cylindrical Bessel function of 0th order, $Y_0(x)$ (more information there)
• bessel_y1(x): Irregular cylindrical Bessel function of first order, $Y_1(x)$ (more information there)
• bessel_yn(x,n): Irregular cylindrical Bessel function of n-th order, $Y_n(x)$ (more information there)
• clausen(x): Clausen integral, $Cl_2(x) = -\int_0^x \mathrm{d}t \log(2\sin(t/2))$ (more information there)
• dawson(x): Dawson integral, $\exp(-x^2)\int_{0}^{x}\exp(t^2)\mathrm{d} t$
• debye_1(x): Debye function of order 1, $D_1 = (1/x) \int_0^x \mathrm{d}t (t/(e^t - 1))$ (more information there)
• debye_2(x): Debye function of order 2, $D_2 = (2/x^2) \int_0^x \mathrm{d}t (t^2/(e^t - 1))$ (more information there)
• debye_3(x): Debye function of order 3, $D_3 = (3/x^3) \int_0^x \mathrm{d}t (t^3/(e^t - 1))$ (more information there)
• debye_4(x): Debye function of order 4, $D_4 = (4/x^4) \int_0^x \mathrm{d}t (t^4/(e^t - 1))$ (more information there)
• debye_5(x): Debye function of order 5, $D_5 = (5/x^5) \int_0^x \mathrm{d}t (t^5/(e^t - 1))$ (more information there)
• debye_6(x): Debye function of order 6, $D_6 = (6/x^6) \int_0^x \mathrm{d}t (t^6/(e^t - 1))$ (more information there)
• dilog(x): The dilogarithm, $Li_2(x) = - \Re \left(\int_0^x \mathrm{d}s \log(1-s) / s\right)$ (more information there)
• exp(x): $\exp x$, works on complex numbers too
• expint_e1(x): Exponential integral $E_1(x) = \int_{x}^{\infty} \frac{\exp -t}{t} \mathrm{d} t$
• expint_e2(x): Exponential integral $E_2(x) = \int_{x}^{\infty} \frac{\exp -t}{t^2} \mathrm{d} t$
• expint_en(x,n): Exponential integral $E_n(x) = \int_{x}^{\infty} \frac{\exp -t}{t^n} \mathrm{d} t$
• fermi_dirac_0(x): Complete Fermi-Dirac integral (index 0), $F_0(x) = \ln(1 + e^x)$ (more information there)
• fermi_dirac_1(x): Complete Fermi-Dirac integral (index 1), $F_1(x) = \int_0^\infty \mathrm{d}t (t /(\exp(t-x)+1))$ (more information there)
• fermi_dirac_2(x): Complete Fermi-Dirac integral (index 2), $F_2(x) = (1/2) \int_0^\infty \mathrm{d}t (t^2 /(\exp(t-x)+1))$ (more information there)
• fermi_dirac_3half(x): Complete Fermi-Dirac integral (index 3/2) (more information there)
• fermi_dirac_half(x): Complete Fermi-Dirac integral (index 1/2) (more information there)
• fermi_dirac_m1(x): Complete Fermi-Dirac integral (index -1), $F_{-1}(x) = e^x / (1 + e^x)$ (more information there)
• fermi_dirac_mhalf(x): Complete Fermi-Dirac integral (index -1/2) (more information there)
• fermi_dirac_n(x,n): Complete Fermi-Dirac integral of index $n$, $F_n(x) = \frac{1}{\Gamma(n+1)} \int_0^\infty \mathrm{d} t \frac{t^n}{\exp(t-x) + 1}$ (more information there)
• gamma(x): The Gauss gamma function $\Gamma(x) = \int_0^{\infty} dt t^{x-1} \exp(-t)$ (more information there)
• gamma_inc(a,x): Incomplete gamma function $\Gamma(a,x) = \int_x^\infty dt t^{a-1} \exp(-t)$ (more information there)
• gamma_inc_p(a,x): Complementary normalized incomplete gamma function $\Gamma_P(a,x) = 1 - \Gamma_Q(a,x) = 1 - \frac{1}{\Gamma(a)}\int_x^\infty dt t^{a-1} \exp(-t)$ (more information there)
• gamma_inc_q(a,x): Normalized incomplete gamma function $\Gamma_Q(a,x) = \frac{1}{\Gamma(a)}\int_x^\infty dt t^{a-1} \exp(-t)$ (more information there)
• gaussian(x,sigma): Normalized gaussian: $p(x,\sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp (-x^2 / 2\sigma^2)$
• gsl_erf(x): Error function $\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x \mathrm{d}t \exp(-t^2)$ – GSL version (more information there)
• gsl_erfc(x): Complementary error function $\mathrm{erfc}(x) = 1 - \mathrm{erf}(x)$ (more information there)
• hyperg_0F1(c,x): Hypergeometric function ${}_0F_1$ (more information there)
• hyperg_1F1(a,b,x): Hypergeometric function ${}_1F_1(a,b,x)$ (more information there)
• hyperg_U(a,b,x): Hypergeometric function $U(a,b,x)$ (more information there)
• k_mhc(lambda, eta): Marcus-Hush-Chidsey integral $k(\lambda, \eta) = \int_{-\infty}^{\infty} \exp\left(\frac{ - (x - \lambda + \eta)^2}{4\lambda}\right) \times \frac{1}{1 + \exp x}\,\mathrm{d}x$. Single precision, computed using the fast trapezoid method. (more information there)
• k_mhc_double(lambda, eta): Marcus-Hush-Chidsey integral $k(\lambda, \eta) = \int_{-\infty}^{\infty} \exp\left(\frac{ - (x - \lambda + \eta)^2}{4\lambda}\right) \times \frac{1}{1 + \exp x}\,\mathrm{d}x$. Double precision, computed using the series by Bieniasz, JEAC 2012. (more information there)
• k_mhc_n(lambda, eta): Approximation to the Marcus-Hush-Chidsey integral described in Nahir, JEAC 2002, $k(\lambda, \eta) \approx \int_{-\infty}^{\infty} \exp\left(\frac{ - (x - \lambda + \eta)^2}{4\lambda}\right) \times \frac{1}{1 + \exp x}\,\mathrm{d}x$ (more information there)
• k_mhc_z(lambda, eta): Approximation to the Marcus-Hush-Chidsey integral described in Zeng et al, JEAC 2014, $k(\lambda, \eta) \approx \int_{-\infty}^{\infty} \exp\left(\frac{ - (x - \lambda + \eta)^2}{4\lambda}\right) \times \frac{1}{1 + \exp x}\,\mathrm{d}x$ (more information there)
• lambert_W(x): Principal branch of the Lambert function $W_0(x)$ (more information there)
• lambert_Wm1(x): Secondary branch of the Lambert function $W_{-1}(x)$ (more information there)
• landau(x): Probability density of the Landau distribution, $p(x) = 1/\pi \int_0^x \mathrm{d}t \exp(-t\log(t) - xt)\sin(\pi t)$ (more information there)
• ln_erfc(x): Logarithm of the complementary error function $\log(\mathop{erfc}(x))$ (more information there)
• ln_gamma(x): The logarithm of the gamma function $\log (\Gamma(x))$ (more information there)
• log(x): $\log x$, works on complex numbers too
• log1p(x): $\ln (1 + x)$, but accurate for $x$ close to 0
• lorentzian(x,gamma): Normalized lorentzian: $p(x,\gamma) = \frac{1}{ \gamma \pi (1 + (x/\gamma)^2) }$
• pseudo_voigt(x, w, mu): Pseudo-Voigt function, defined by: $\frac{1-\mu}{\sqrt{2 \pi w^2}} \exp (-x^2 / 2w^2) + \frac{\mu}{ w \pi (1 + (x/w)^2) }$
• psi(x): Digamma function: $\psi(x) = \Gamma'(x)/\Gamma(x)$ (more information there)
• psi_1(x): Trigamma function: $\psi^{(1)} = \frac{\mathrm d \Gamma'(x)/\Gamma(x)}{\mathrm d x}$ (more information there)
• psi_n(x, n): Polygamma function: $\psi^{(n)} = \frac{\mathrm d^n \Gamma'(x)/\Gamma(x)}{\mathrm d x }$ (more information there)
• trumpet_bv(m, alpha, prec): Position of an oxidative adsorbed 1-electron peak. $m$ is the coefficient defined by Laviron, the value is returned in units of $RT/F$
• weibull(x,a,b): Probability of the Weibull distribution $P_W(x,a,b)$ (more information there)
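Some of these functions are simple enough to be written directly; for instance the normalized gaussian above, whose normalization can be checked numerically. This is a plain-Ruby illustration, not the GSL implementation QSoas relies on, and the trapezoid helper is a hypothetical name.

```ruby
# The normalized gaussian listed above,
#   p(x, sigma) = 1/sqrt(2*pi*sigma^2) * exp(-x^2 / (2*sigma^2)),
# with a crude trapezoid-rule check that it integrates to 1.

def gaussian(x, sigma)
  Math.exp(-x**2 / (2.0 * sigma**2)) / Math.sqrt(2.0 * Math::PI * sigma**2)
end

def trapezoid(from, to, n)
  h = (to - from) / n.to_f
  sum = 0.5 * (yield(from) + yield(to))
  (1...n).each { |i| sum += yield(from + i * h) }
  sum * h
end

# Integrating over +/- 8 sigma captures essentially all of the density:
area = trapezoid(-8.0, 8.0, 2000) { |x| gaussian(x, 1.0) }
# area is very close to 1
```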

## Physical constants

Some physical/mathematical constants are available; their name starts with an uppercase letter.

• Alpha: The fine structure constant, $\alpha$ – 0.00729735
• C: The speed of light in vacuum, $c$ – 2.99792e+08
• Eps_0: The permittivity of vacuum, $\epsilon_0$ – 8.85419e-12
• F: Faraday’s constant, $F$ – 96485.3
• H: The Planck constant, $h$ – 6.62607e-34
• Hbar: $\hbar = h/2\pi$ – 1.05457e-34
• Kb: Boltzmann’s constant – 1.38065e-23
• M_e: The mass of the electron, $m_e$ – 9.10938e-31
• M_mu: The mass of the mu, $m_\mu$ – 1.88353e-28
• M_n: The mass of the neutron, $m_n$ – 1.67493e-27
• M_p: The mass of the proton, $m_p$ – 1.67262e-27
• Mu_0: The permeability of vacuum, $\mu_0$ – 1.25664e-06
• Mu_B: The Bohr Magneton, $\mu_B$ – 9.27401e-24
• Na: The Avogadro number, $N_A$ – 6.02214e+23
• Pi, PI: $\pi$ – 3.14159
• Q_e: The absolute value of the charge of the electron, $e$ – 1.60218e-19
• R: Molar gas constant, $R$ – 8.31447
• Ry: The Rydberg constant, $Ry$ – 2.17987e-18
• Sigma: The Stefan-Boltzmann radiation constant – 5.6704e-08
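Several of these constants are related to each other, which gives a quick sanity check of the values printed above: $\hbar = h/2\pi$ and $R = N_A k_B$. In plain Ruby (only a few significant digits are expected to agree, since the values above are rounded):

```ruby
# Consistency check of the constants listed above: Hbar = H / (2*pi)
# and R = Na * Kb, using the rounded values as printed.

h  = 6.62607e-34   # H,  the Planck constant
kb = 1.38065e-23   # Kb, Boltzmann's constant
na = 6.02214e23    # Na, the Avogadro number

hbar = h / (2 * Math::PI)   # close to the listed 1.05457e-34
r    = na * kb              # close to the listed 8.31447

(hbar / 1.05457e-34 - 1).abs < 1e-4   # => true
(r / 8.31447 - 1).abs < 1e-4          # => true
```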

The embedded version of Ruby, mruby, does not have a regular expression engine. We have added one, but it is not based on standard Ruby regular expressions, but on the ones from Qt. For most regular expressions, this should not matter, however.

# Running QSoas

QSoas can also be useful when run from the command-line.

## Command-line options

When starting QSoas from a terminal, you can use a number of command-line options to change its behaviour. Here are the most useful:

• --run command will run the command command after QSoas starts up.
• --exit-after-running will run the commands specified by --run, and then exit the program. This can be used to run scripts to automatically process data without user interaction.
• --no-startup-files disables the loading of startup scripts.
• --stdout makes the text written to the QSoas terminal also appear in the standard output (i.e. the terminal from which you started QSoas).
• --load-stack file loads the given file as a stack file just after QSoas starts up.

## Non-interactive running of QSoas

It is possible to run QSoas completely non-interactively. This can be useful for regenerating the results of fits, or massively subtracting baselines…

The simplest way to do so is to use the scripts/qs-run script included in the source code archive. Copy that script where you have the QSoas command file you want to run, open an operating system command-line terminal and run:

# ./qs-run my-command-script.txt

This file was written by Vincent Fourmond, and is copyright (c) 2012-2020 by CNRS/AMU.