QSoas command reference
Here is the command reference for QSoas, which lists all the commands, what they do, and how to use them.
For a quick introduction to QSoas, you may look at the tutorial, or at the list of Frequently Asked Questions.
QSoas datasets
The basic unit to manipulate data in QSoas is the dataset (sometimes also called “buffer”, from the terminology used in SOAS). A dataset is a large table of numbers. The first column contains the X values, and the following columns are Y values. A dataset can have many columns. QSoas plots a dataset by showing the first Y column as a function of the X column.
You can use the edit command to see and edit the contents of the table.
In addition to the raw numbers, a QSoas dataset contains the following information:
- A name, which is originally the name of the loaded file. It is modified with each command applied to the dataset.
- A series of meta-data, which are just named pieces of information. They can be numbers, text, dates, even lists.
- Perpendicular coordinates, one for each Y column. They are used when the dataset can also be seen as a series of curves, with a different coordinate value for each Y column.
- A series of flags, which can be used to retrieve datasets from the stack. Unlike the other attributes, flags are not kept when the dataset is modified.
- Possible names for rows and columns. These can be manipulated using set-column-names and set-row-names; see below.
Different ways to interpret datasets
Datasets are really just a collection of columns of numbers, but it is possible to give them different meanings:
- The most common interpretation is just a series of Y columns plotted against the X column: y, y2, and so on.
- If perpendicular coordinates are specified, then it is possible to view these columns as a table of values, with the rows corresponding to the first (X) column and one perpendicular coordinate value for each Y column. It is possible to treat data this way using for instance the /mode=xyz option of apply-formula, or to draw contour lines using the contour command.
- It is possible to just treat the columns as a series of matching numbers, one per row. In that case, using column names can greatly help. The command tweak-columns is of great help to manipulate the columns of a dataset containing several (or even many) columns.
Row and column names
QSoas can store names for columns and rows. As mentioned above, they can be set using set-column-names and set-row-names.
Column names are used in particular to designate columns, either:
* in formulas (using the $c.column syntax);
* in column specifications, using either $c.column or named:column.
Column names are visible in edit and show.
Row names are only visible in edit. As of now, their use is relatively limited, but they become column names upon using transpose, and you can save them by specifying /row-names=true to the save command.
QSoas can read the column and row names from a file, generally seamlessly for column names, but for row names, most of the time you have to tell it which columns contain the row names by using the /text-columns option of load.
Learning to handle column names is particularly useful when working with exported fit parameters.
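For instance, here is a hedged sketch (assuming set-column-names takes the new names as its arguments; the names themselves are arbitrary examples): name the columns of a three-column dataset, then refer to them by name in a formula:
QSoas> set-column-names time current potential
QSoas> apply-formula y=$c.current/$c.potential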
Not a number
The table cannot contain text. When QSoas reads a file and is not able to make a number from what it reads, it uses a special numeric value called nan (Not A Number). nan values can be useful, but they “pollute” numbers: any operation that involves a nan will also have nan as a result. This means in particular that it is not possible to fit a dataset that contains a nan, or to determine an average value, and so on…
nan
values are displayed as a big cross joining the two points
between which they are located.
To get rid of points that have either X or Y values that are
nan
, use the following:
QSoas> strip-if x.nan?||y.nan?
Commands, arguments and options (how to read this document)
QSoas works by entering commands inside the command prompt, or alternatively using the menus.
Most commands have arguments and options. Arguments and options are separated by spaces:
QSoas> command argument1 argument2 "argument 3" /option=option /option2="with spaces"
If you need to pass arguments or option values that have spaces, make
sure you quote them using "
or '
, like in the above example. The
=
sign for the options can be replaced by a space, so that the
command above could also have been run thus:
QSoas> command argument1 argument2 "argument 3" /option option /option2 "with spaces"
Arguments are italicized in the documentation below. You
need to provide all the arguments for a command to work, and if you
don’t, QSoas will prompt for them. Some
arguments are followed by …, which means that you can pass several
space-separated arguments. This is the case for load
,
for instance:
QSoas> load file1 file2 file3
The order of the arguments must absolutely be respected. On the other hand, the options can come at any place in the command line. For instance, the two following commands are equivalent:
QSoas> load file.dat /columns=2,3
QSoas> load /columns=2,3 file.dat
Default option
Some options are marked as “(default option)”, which means that, if all
arguments of the command are already specified, you can omit the
/option=
part of the option. For instance, to set the temperature to 300 K, you would run:
QSoas> temperature /set=300
But, as /set
is the default option, you can omit the /set=
and
write:
QSoas> temperature 300
In this documentation, all options and arguments have mouseover texts that give a short explanation of what kind of values are expected.
Some commands can be used through a short name (like q
for
quit
), indicated as such in the present documentation.
Some commands are marked as (interactive). This means that their use requires user input. If they are used in a script, the script pauses for user interaction.
Using the menus to discover a command
All the commands that can be run from the command line are also available from within the menus. Running the command through the menu gives a dialog box in which one must choose the arguments of the command, and one can also select the options.
This can be a good way to discover what commands are available, and what they do.
Note about text files
Many commands of QSoas make use of “plain text files”, i.e. files that simply contain unformatted text. These are for instance:
- files for defining fits with
load-fits
- scripts to be run with
run
- definitions of kinetic systems for
fit-kinetic-system
- saved fit parameters
On Windows, use Notepad to edit them. On Linux, pico, nano, vi or emacs are pretty good choices. On macOS, use TextEdit, but make sure you hit Cmd+Shift+T to switch to “plain text” format; the default is rich text (i.e. text with formatting information) in the RTF format, and QSoas does not understand RTF.
“inline” text files
Starting from QSoas version 3.1, it is possible to “define” the contents of text files directly inside a script file, by using a special ## INLINE: file name … ## INLINE END block. The text between these two markers becomes accessible as a special file called inline:file name. Try for instance running the following script:
## INLINE: data.dat
1 2
2 5
3 9
## INLINE END
load inline:data.dat
This is very useful in particular in combination with
run-for-each
or run-for-datasets
to define “subroutines”
that are maintained in the same file as the main one.
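As a hedged illustration (assuming run-for-datasets takes the script first and then a dataset list; the formula and flag name are arbitrary examples), a small “subroutine” can be kept inline in the main script:
## INLINE: to-microamps.cmds
apply-formula y=y*1e6
## INLINE END
run-for-datasets inline:to-microamps.cmds flagged:raw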
Dataset lists (or buffer lists) arguments
Many commands, such as flag
, contract
and others take
lists of datasets as arguments. This list can take
several forms:
- A comma-separated list of dataset numbers (the ones given by show-stack), such as 1,4,7 (0 is the current dataset, 1 the one just before, which you can reach using undo, etc.).
- Negative numbers refer to the “redo” stack: -1 is the dataset you would get by running redo.
- A number range, such as 1..7, meaning all datasets from 1 to 7 included.
- A number range with a step, such as 1..7:3, meaning 1,4,7.
- all for all datasets on the stack.
- displayed for the currently displayed datasets.
- latest for the datasets produced by the last command (running a script counts as many commands); this can be different from 0 if the last command produced more than one dataset, or none. latest:1 is the same as latest, latest:2 represents the datasets produced by the command before the last one, etc…
It is also possible to make use of dataset flags set by flag:
- flagged stands for all flagged datasets (regardless of the name of the flag);
- unflagged for all datasets that don’t have any flag;
- flagged- and unflagged- do the same, but with the datasets in the reverse order;
- flagged:flagname for all datasets that have the flag flagname;
- unflagged:flagname for all datasets that don’t have the flag flagname;
- and the variants flagged-:flagname and unflagged-:flagname for the reversed order.
Finally, it is also possible to specify datasets by their name, using
the named:
prefix. For instance, named:generated.dat
refers to
all the datasets whose name is generated.dat
.
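For instance, a few hedged examples combining these selectors with commands documented below (the flag name is an arbitrary example):
QSoas> overlay-buffer 1..4
QSoas> save-datasets flagged:samples
QSoas> hide-buffer named:generated.dat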
Note that in this documentation, the terms “buffer” and “dataset” are synonyms.
Dataset columns
Some commands such as bin or dataset-options take dataset column names (or numbers) as arguments or options. There are several ways to designate them:
- using a number: 1 is the first column, 2 is the second column, and so on;
- using a number prefixed by #: this is a 0-based index, #0 is then the first column;
- by its name: x, y, z, y2, y3 and so on. y2 is equivalent to z;
- no or none when you don’t want to specify a column at all, such as for disabling the display of error bars with dataset-options.
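For instance, as a hedged sketch, the following three commands are equivalent ways of designating the third column as the one holding Y errors with dataset-options:
QSoas> dataset-options /yerrors=3
QSoas> dataset-options /yerrors=#2
QSoas> dataset-options /yerrors=y2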
Some commands (like contract) take column lists, which are comma-separated lists of columns (just like above), with the addition of ranges: 2..6 means columns 2 to 6 inclusive.
Regular expressions
Some commands, notably load and the related commands, make use of “regular expressions”. Regular expressions are a way to describe what a text looks like, such as “numbers”, “white spaces”, “anything that looks like a date”, etc. Here is how it works:
- A simple text just matches itself. For instance, using /separator=, for load-as-text means that the columns are separated by commas.
- {blank-line} matches a fully blank line.
- {blank} matches a series of blanks. This is the default separator for load-as-text.
- {text-line} matches a line that does not start with numbers (ignoring spaces).
- /regex/ is taken as a Qt regular expression. For instance, /[;,]/ means “either ; or ,”. Please see the Qt documentation for more information.
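For instance, as a hedged example (the file name is arbitrary), to load a file whose columns are separated by either commas or semicolons, one can pass a regular expression to /separator=:
QSoas> load-as-text data.txt /separator=/[;,]/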
Commands producing several datasets
Many commands in QSoas will produce several datasets, for instance
load
, that loads several files at the same time, or
split-monotonic
, that splits a dataset into its monotonic
parts. All these commands share a set of options:
- /style can be used to display all the curves with gradual changes in color; use for instance /style=red-to-blue or /style=brown-green (there is automatic completion on this);
- /flags can be used to set flags on the newly generated datasets, see the flag command for more information;
- /set-meta can be used to set meta-data on the newly generated datasets, using a key=value syntax (so you have two = signs in a row); this option can be used several times to add several meta-data;
- /reversed can be used to reverse the order in which the datasets are pushed onto the stack, useful for instance to get the result of sim- commands in the same order as the original datasets.
For instance, try out:
QSoas> generate-dataset -1 1 /style=brown-green sin((10+number)*x) /number=11
QSoas> generate-dataset -1 1 /set-meta=a=2 /set-meta=b=3
General purpose commands
quit
– Quit
quit
Other name: q
Exits QSoas, losing the current session. The full log of the session
is always available in the soas.log
file created in the initial
directory. This is indicated at startup in the terminal.
To avoid accumulating very large log files, the log file gets renamed
as soas.log.1
when you start QSoas (and the older one as
soas.log.2
, and so on until soas.log.5
).
If you want to save the entire state of QSoas before quitting so you
can restart exactly from where you left, use save-stack
.
credits
– Credits
credits
/full=
yes-no
/full=
yes-no: Full text of the licenses – values: a boolean:yes
,on
,true
orno
,off
,false
This command displays credits, copyright and license information of
QSoas and all the dependencies linked to or built in your
version. You’ll get the full license text with /full=true
.
It also lists publications whose findings/equations/algorithms were directly used in QSoas.
version
– Version
version
/dump-sysinfo=
yes-no /show-features=
yes-no
/dump-sysinfo=
yes-no: If true, writes system specific information to standard output – values: a boolean:yes
,on
,true
orno
,off
,false
/show-features=
yes-no: If true, shows detailed information about the capabilities of QSoas (defaults to false) – values: a boolean:yes
,on
,true
orno
,off
,false
Prints the version number of QSoas, including various build information.
If the option /show-features=true
, then the output is much longer,
and contains a list of all the features built in QSoas
, including
the fit engines, the available statistics, the
time-dependent parameters and so on.
save-history
– Save history
save-history
file /overwrite=
yes-no
- file: Output file – values: name of a file
/overwrite=
yes-no: If true, overwrite without prompting – values: a boolean:yes
,on
,true
orno
,off
,false
Saves all the commands that were launched since the beginning of the session, to the given (text) file.
This can be used for saving a series of commands that should be applied repeatedly, as a script.
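A hedged example of this workflow (the file name is arbitrary): save the commands of the current session, then replay them later with run:
QSoas> save-history processing.cmds
QSoas> run processing.cmds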
files-browser
– Browse files
files-browser
(interactive)
This command starts a file browser, which makes it easy to figure out what files are present, what meta-data are associated with them, and what kind of backend will be used to load them.
The browser makes it very easy to edit the values of the meta-data, as they are displayed each in their own column and are editable. Copy/paste from an external spreadsheet is supported.
cd
– Change directory
cd
directory /from-home=
yes-no /from-script=
yes-no
Other name: G
- directory: New directory – values: name of a directory
/from-home=
yes-no: If on, relative from the home directory – values: a boolean:yes
,on
,true
orno
,off
,false
/from-script=
yes-no: If on, cd relative from the current script directory – values: a boolean:yes
,on
,true
orno
,off
,false
Changes the current working directory. If /from-home
is specified,
the directory is assumed to be relative to the user’s home directory.
If /from-script
is specified, the directory is assumed to be
relative to that of the command file currently being executed by a
run
command (or in a startup script).
pwd
– Working directory
pwd
Prints the full path of the current directory.
It is also indicated in the title of the QSoas window.
head
– Head
head
file /number=
integer /skip=
integer
- file: name of the file to show – values: name of a file
/number=
integer: number of lines to show – values: an integer/skip=
integer: number of lines to skip – values: an integer
This command prints the first few lines of the given file to the terminal. This is useful to quickly see the contents of a file, and to see how QSoas is able to read it.
The number of lines being printed is chosen using the /number=
option (negative means print everything).
A number of lines can be skipped at the beginning using the /skip=
option.
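For instance, to skip the first two lines of a file and show the following ten (the file name is arbitrary):
QSoas> head data.dat /number=10 /skip=2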
ls
– List files
ls
(/directory=
)directory
- (
/directory=
)directory (default option): Directory to list – values: name of a directory
ls
lists the files in the current directory, just like the standard
Unix command.
temperature
– Temperature
temperature
(/set=
)number
Other name: T
- (
/set=
)number (default option): Sets the temperature – values: a floating-point number
Shows or sets the current temperature, in Kelvins. The temperature is
used in many places, mostly in fits to serve as the initial value for
the temperature parameter. To set the temperature, pass its new value
using the /set
option (the /set=
part is optional):
QSoas> temperature 310
commands
– Commands
commands
Lists all available commands, with a short help text. This also includes user-defined commands, such as custom fits loaded from a fit file and aliases.
help
– Help on…
help
(/command=
)command /dump=
yes-no /location=
text /synopsis=
yes-no
Other name: ?
- (
/command=
)command (default option): The command on which to give help – values: the name of one of QSoas’s commands /dump=
yes-no: Shows information about the contents of the help files – values: a boolean:yes
,on
,true
orno
,off
,false
/location=
text: Shows the given URL location in the documentation – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/synopsis=
yes-no: Does not show the help, but print a brief synopsis – values: a boolean:yes
,on
,true
orno
,off
,false
Gives all help available on the given command. It shows the inline (HTML) documentation.
If you have doubts whether the documentation is up-to-date, you can
use the /synopsis=true
option to have a brief text description of
the command together with its arguments and options. By construction,
this small text is always up-to-date.
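For instance, to get the brief synopsis of the load command:
QSoas> help load /synopsis=true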
If you don’t know what the /location
option does, you don’t need it.
tips
– Tips
tips
/show-at-startup=
yes-no
/show-at-startup=
yes-no: – values: a boolean:yes
,on
,true
orno
,off
,false
Without any options, it shows the “startup tips” window. With the
/show-at-startup
option, you can control whether the tips will show
at startup in the next run of QSoas or not.
save-output
– Save output
save-output
file /overwrite=
yes-no
- file: Output file – values: name of a file
/overwrite=
yes-no: If true, overwrite without prompting – values: a boolean:yes
,on
,true
orno
,off
,false
Saves all the text in the terminal to a plain text file. This is equivalent to copy-pasting the contents of the terminal into a plain text file using a text editor.
print
– Print
print
(/file=
)file /nominal-height=
integer /overwrite=
yes-no /page-size=
text /title=
text
Other name: p
- (
/file=
)file (default option): Save as file – values: name of a file /nominal-height=
integer: Correspondance of the height of the page in terms of points – values: an integer/overwrite=
yes-no: If true, overwrite without prompting – values: a boolean:yes
,on
,true
orno
,off
,false
/page-size=
text: Sets the page size, like 9×6 for 9cm by 6cm – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/title=
text: Sets the title of the page as printed – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Prints the current view, providing a usual print dialog. If you just
want a PDF or PostScript file, just provide the file name as the
/file
option.
An optional title can be added using the /title
option.
You can also use a .svg
extension if you want to produce a SVG file
that can later be modified, by, e.g. Inkscape.
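For instance, a hedged example (file name and title are arbitrary): export the current view directly to a PDF file with a title:
QSoas> print /file=current-view.pdf /title="Sample 2, scan 3"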
Important note: QSoas is not a data plotting system, it is a data analysis program. Don’t expect miraculous plots!
define-alias
– Define alias
define-alias
alias command /*=
text
- alias: The name to give to the new alias – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- command: The command to give an alias for – values: the name of one of QSoas’s commands
/*=
text: All options – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
The define-alias command allows one to define a shortcut for a command one often uses with the same options. For instance, running:
QSoas> define-alias fit-2exp fit-exponential-decay /exponentials=2 /loss=true
creates a fit-2exp
command that is equivalent to starting
fit-exponential-decay
with two exponentials by default
and film loss on.
Aliases can only be used to provide default values for options; they cannot provide default values for arguments.
display-aliases
– Display aliases
display-aliases
Shows a list of all the currently defined aliases.
graphics-settings
– Graphics settings
graphics-settings
/antialias=
yes-no /line-width=
number /opengl=
yes-no
/antialias=
yes-no: Turns on/off the use of antialised graphics – values: a boolean:yes
,on
,true
orno
,off
,false
/line-width=
number: Sets the base line width for all lines/curves – values: a floating-point number/opengl=
yes-no: Turns on/off the use of OpenGL acceleration – values: a boolean:yes
,on
,true
orno
,off
,false
Gives the possibility to tweak a few settings concerning display. The settings are kept from one QSoas session to the next.
Turning on antialiasing (with /antialias=true) will make QSoas use antialiased drawings, which admittedly look nicer, but require much more computation time, to the point that drawing jagged curves may become particularly slow. Printing or exporting to PDF files through print always produces antialiased graphics, regardless of this option.
If you experience performance problems for displaying curves, use
/opengl=true
, as this will instruct QSoas to use hardware
acceleration to display curves. It is off by default as some setups do
not really benefit from that, and the OpenGL support is sometimes
buggy and may result in crashes.
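For instance, to turn on antialiasing and use slightly thicker lines (the value is an arbitrary example):
QSoas> graphics-settings /antialias=true /line-width=1.5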
ruby-run
– Ruby load
ruby-run
file
- file: Ruby file to load – values: name of a file
This command loads and executes a Ruby file. For the time being, the main interest of this command is to define complex functions in a separate file.
Imagine you have a file function.rb
containing the text:
def mm(x,vmax,km)
  return vmax/(1 + km/x)
end
After running
QSoas> ruby-run function.rb
you can use mm like any normal function for fitting:
QSoas> fit-arb mm(x,vmax,km)
or use it in eval
:
QSoas> eval mm(1.0,2.0,3.0)
=> 0.5
You can find out more about ruby code below, but here is how
one can define a function my_exp
that is 0 before t0
and follows
an exponential relaxation starting at val
with a time constant tau
afterwards:
def my_exp(t,t0,tau,val)
  if t < t0
    return 0
  else
    return val*exp(-(t-t0)/tau)
  end
end
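Assuming this definition is saved in a file called, say, my_exp.rb (an arbitrary name), it can then be loaded and used in a fit, following the same pattern as above:
QSoas> ruby-run my_exp.rb
QSoas> fit-arb my_exp(x,t0,tau,val)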
break
– Break
break
Exits from the current script. Has no effect if not inside a script.
debug
– Debug
debug
(/directory=
)directory /level=
integer
- (
/directory=
)directory (default option): Directory in which the debug output is saved – values: name of a directory /level=
integer: Sets the debug level – values: an integer
With this command, it is possible to collect a large amount of debugging information. You will essentially only need this to send information to the QSoas developers to help them track down problems.
The command:
QSoas> debug directory
sets up the automatic debug output in the directory directory
.
The /level option corresponds to the debug level. It defaults to 1; the higher this number, the more detailed the output will be.
system
– System
system
command… /shell=
yes-no /timeout=
integer
- command…: Arguments of the command – values: one or more files. Can include wildcards such as *,
[0-4]
, etc… /shell=
yes-no: use shell (on by default on Linux/Mac, off in windows) – values: a boolean:yes
,on
,true
orno
,off
,false
/timeout=
integer: timeout (in milliseconds) – values: an integer
The system
command can be used to run external commands from
QSoas. The output of the commands will be displayed in the terminal.
For the duration of the external command, QSoas will not respond to keyboard and mouse.
If /shell is on (the default on Linux and Mac, but off on Windows), the command will be processed by the shell before being run.
If a strictly positive /timeout
is specified, the command will be
killed if it takes longer than the timeout to execute.
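For instance, a hedged example on Linux/Mac (the external command and the timeout are arbitrary): list the data files of the current directory, killing the command if it takes more than five seconds:
QSoas> system ls -l *.dat /timeout=5000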
timer
– Timer
timer
/name=
text
/name=
text: name for the timer – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
The first call starts a timer, and the second one stops it, showing
the amount of time that has elapsed since the previous call to
timer
. This can be used to benchmark costly computations, for
instance.
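A hedged sketch of such a benchmark (the middle command is just an arbitrary example of a computation):
QSoas> timer
QSoas> generate-dataset 0 10 sin(x)
QSoas> timer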
mem
– Memory
mem
/cached-files=
integer
/cached-files=
integer: – values: an integer
Displays information about the resource use of QSoas, including memory
use, the number of cached files and the total CPU time used so
far. The size of the file cache can be changed using the
/cached-files
option.
Output file manipulation
Several commands (e.g. various data analysis commands and the fit commands) write data to the output file.
By default, the first time the output file is used, an output.dat file is created in the current directory. Another file can be used by providing its name to the output command.
output
– Change output file
output
(/file=
)file /meta=
words /overwrite=
yes-no /reopen=
yes-no
- (
/file=
)file (default option): name of the new output file – values: name of a file /meta=
words: when writing to output file, also prints the listed meta-data – values: several words, separated by ‘,’/overwrite=
yes-no: if on, overwrites the file instead of appending (default: false) – values: a boolean:yes
,on
,true
orno
,off
,false
/reopen=
yes-no: if on, forces reopening the file (default: false) – values: a boolean:yes
,on
,true
orno
,off
,false
This command has several modes of operation. If file is provided
(it is the default option, so you can omit /file=
), then it opens
file as the new output file. By default, if the file exists, new
data are appended, and the old data are left untouched. You can force
overwriting by specifying /overwrite=true.
In the other mode, when only the /meta
option is provided, it sets
the list of meta-data that will automatically be added to the output
file when outputting any data there. It is a comma-separated list of
meta names. See more about meta-data there.
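For instance, a hedged example (the file and meta-data names are arbitrary): switch to a new output file, then request that the sample and pH meta-data be written along with any subsequent output:
QSoas> output results.dat
QSoas> output /meta=sample,pH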
It is a bad idea to modify the output file while QSoas is still using it, as it messes up what QSoas thinks is in the output file. If you forgot you were using the output file and modified it, you can avoid problems by running:
QSoas> output /reopen=true
comment
– Write line to output
comment
comment
- comment: Comment line added to output file – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Writes the given line comment to the output file. Don’t forget to quote if you need to include spaces:
QSoas> comment 'Switching to sample 2'
Data loading/saving
The main command for loading data is load
.
load
– Load
load
file… /auto-split=
yes-no /columns=
integers /comments=
pattern /decimal=
text /expected=
integer /flags=
flags /for-which=
code /histogram=
yes-no /ignore-cache=
yes-no /ignore-empty=
yes-no /reversed=
yes-no /separator=
pattern /set-meta=
meta-data /skip=
integer /style=
style /text-columns=
integers /yerrors=
column
Other name: l
- file…: the files to load – values: one or more files. Can include wildcards such as *,
[0-4]
, etc… /auto-split=
yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/columns=
integers: columns loaded from the file – values: a comma-separated list of integers/comments=
pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters/decimal=
text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/expected=
integer: Expected number of loaded datasets – values: an integer/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Select on formula – values: a piece of Ruby code/histogram=
yes-no: whether to show as a histogram (defaults to false) – values: a boolean:yes
,on
,true
orno
,off
,false
/ignore-cache=
yes-no: if on, ignores cache (default off) – values: a boolean:yes
,on
,true
orno
,off
,false
/ignore-empty=
yes-no: if on, skips empty files (default on) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/separator=
pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/skip=
integer: skip that many lines at beginning – values: an integer/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/text-columns=
integers: text columns – values: a comma-separated list of integers/yerrors=
column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’
Loads the given files and pushes them onto the data stack. QSoas
features several backends for loading files (“backends” are roughly
equivalent to “file formats”). In principle, QSoas is smart enough to
figure out which one is correct, but you can force the use of a given
backend by using the appropriate load-as-
command. Using a backend
directly also provides more control on the way files are loaded (this
can also be done via the numerous options to load
, which are
forwarded to the appropriate backend). Currently available backends:
- text for plain space-separated text
- csv for CSV data
- chi-txt for files from CH Instruments potentiostats
- eclab-ascii for ASCII files exported from Biologic potentiostats
- parameters for fit parameters (“saved for reusing later”)
Look in their documentation for more information. In particular, the
options /separator=
, /decimal=
, /skip=
, /comments=
,
/columns=
and /auto-split
are documented in the
load-as-text
command.
QSoas tells you which backend it used for loading a given file:
QSoas> load 03.dat
Loading file: './03.dat' using backend text
The command load
caches the loaded file. If for some reason, the
cache gets in the way, use the direct load-as-
commands, or
alternatively use /ignore-cache=true
.
load, like all the other commands that take several files as arguments, understands Unix-like wildcards:
QSoas> load *.dat
This command loads all the files ending in .dat from the current directory.
QSoas> load [0-4]*.dat
This loads only those that start with a digit from 0 to 4, etc.
One can also set various dataset options while loading with load (and the load-as- commands), using the options /yerrors= and /histogram=. See the dataset-options command for more information.
The /style=
option sets the color style when loading several curves:
QSoas> load *.dat /style=red-blue
This loads all the .dat
files in the current directory, and displays
them with a color gradient from red (for the first loaded file) to
blue (for the last loaded file).
With the /flags= option, one can flag datasets as they get loaded. Using it has the same effect as running flag with the same option on the loaded datasets.
The load command also provides dataset selection rules through the /for-which option; more about that in the dedicated section.
By default, the load and related commands will not create a dataset if it would be empty (i.e. a valid data file containing no data); you can force the creation of empty datasets using /ignore-empty=false.
Finally, it is possible to provide the number of datasets that should be loaded with the /expected= option. The command fails if the number of loaded datasets does not match the number given. This can be useful in scripts, to abort the script when a file is missing; see run to make use of this.
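For instance, a hedged example (the pattern is arbitrary): in a script, fail early if the ten expected files are not all present:
QSoas> load scan-*.dat /expected=10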
load-as-text
– Load files with backend ‘text’
load-as-text
file… /auto-split=
yes-no /columns=
integers /comments=
pattern /decimal=
text /expected=
integer /flags=
flags /for-which=
code /histogram=
yes-no /ignore-empty=
yes-no /reversed=
yes-no /separator=
pattern /set-meta=
meta-data /skip=
integer /style=
style /text-columns=
integers /yerrors=
column
- file…: the files to load – values: one or more files. Can include wildcards such as *,
[0-4]
, etc… /auto-split=
yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/columns=
integers: columns loaded from the file – values: a comma-separated list of integers/comments=
pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters/decimal=
text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/expected=
integer: Expected number of loaded datasets – values: an integer/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Select on formula – values: a piece of Ruby code/histogram=
yes-no: whether to show as a histogram (defaults to false) – values: a boolean:yes
,on
,true
orno
,off
,false
/ignore-empty=
yes-no: if on, skips empty files (default on) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/separator=
pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/skip=
integer: skip that many lines at beginning – values: an integer/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/text-columns=
integers: text columns – values: a comma-separated list of integers/yerrors=
column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’
Loads files using the backend text
, bypassing cache and automatic
backend detection. text
recognizes space-separated data (which
includes tab-separated data). Most “plain text” files will be read
correctly by this backend. By default, it loads all the columns of the
file, but only displays the second as a function of the first. If you
want to work on other columns, have a look at expand
.
Alternatively, you can specify the columns to load using the
/columns
option, see below.
Apart from the options of dataset-options
and the /style
and
/flags
options documented in the load
command, the text
backend accepts several options controlling the way the text files are
interpreted:
- /separator specifies the text that separates the columns (blank spaces by default). You can use regular expressions.
- /decimal specifies the decimal separator for loading (default is the dot). This is for loading only.
- /comments specifies a regular expression describing comment lines (i.e. lines that get ignored). By default, lines that don’t start with a number are ignored.
- Give /skip a number of text lines that should be ignored at the beginning of the text file.
- If /auto-split is true, then QSoas will create a new dataset every time it hits a series of blank lines in the file.
- /columns is a series of numbers saying in which order the file columns will be used to make a dataset. For instance, /columns=2,1 will swap X and Y at load time.
- /text-columns designates columns in the file that will be interpreted as “text”, that is, row names. 1 is the first column.
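For instance, a hedged example (the file name is arbitrary): a file with a three-line header, semicolons as column separators and commas as decimal separator could be loaded with:
QSoas> load-as-text data.txt /skip=3 /separator=; /decimal=,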
load-as-csv
– Load files with backend ‘csv’
load-as-csv
file… /auto-split=
yes-no /columns=
integers /comments=
pattern /decimal=
text /expected=
integer /flags=
flags /for-which=
code /histogram=
yes-no /ignore-empty=
yes-no /reversed=
yes-no /separator=
pattern /set-meta=
meta-data /skip=
integer /style=
style /text-columns=
integers /yerrors=
column
- file…: the files to load – values: one or more files. Can include wildcards such as *,
[0-4]
, etc… /auto-split=
yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/columns=
integers: columns loaded from the file – values: a comma-separated list of integers/comments=
pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters/decimal=
text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/expected=
integer: Expected number of loaded datasets – values: an integer/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Select on formula – values: a piece of Ruby code/histogram=
yes-no: whether to show as a histogram (defaults to false) – values: a boolean:yes
,on
,true
orno
,off
,false
/ignore-empty=
yes-no: if on, skips empty files (default on) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/separator=
pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/skip=
integer: skip that many lines at beginning – values: an integer/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/text-columns=
integers: text columns – values: a comma-separated list of integers/yerrors=
column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’
The csv
backend is essentially the same backend as the
text
one, but with the separators set by
default to commas and semicolons, to parse CSV files.
Hence, the options have the same meaning as for load-as-text
.
load-as-chi-txt
– Load files with backend ‘chi-txt’
load-as-chi-txt
file… /auto-split=
yes-no /columns=
integers /comments=
pattern /decimal=
text /expected=
integer /flags=
flags /for-which=
code /histogram=
yes-no /ignore-empty=
yes-no /reversed=
yes-no /separator=
pattern /set-meta=
meta-data /skip=
integer /style=
style /text-columns=
integers /yerrors=
column
- file…: the files to load – values: one or more files. Can include wildcards such as *,
[0-4]
, etc… /auto-split=
yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/columns=
integers: columns loaded from the file – values: a comma-separated list of integers/comments=
pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters/decimal=
text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/expected=
integer: Expected number of loaded datasets – values: an integer/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Select on formula – values: a piece of Ruby code/histogram=
yes-no: whether to show as a histogram (defaults to false) – values: a boolean:yes
,on
,true
orno
,off
,false
/ignore-empty=
yes-no: if on, skips empty files (default on) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/separator=
pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/skip=
integer: skip that many lines at beginning – values: an integer/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/text-columns=
integers: text columns – values: a comma-separated list of integers/yerrors=
column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’
This is a slightly modified version of load-as-text that handles text files from CH Instruments better (and is in particular able to detect at least some of their meta-data).
load-as-eclab-ascii
– Load files with backend ‘eclab-ascii’
load-as-eclab-ascii
file… /auto-split=
yes-no /columns=
integers /comments=
pattern /decimal=
text /expected=
integer /flags=
flags /for-which=
code /histogram=
yes-no /ignore-empty=
yes-no /reversed=
yes-no /separator=
pattern /set-meta=
meta-data /skip=
integer /style=
style /text-columns=
integers /yerrors=
column
- file…: the files to load – values: one or more files. Can include wildcards such as *,
[0-4]
, etc… /auto-split=
yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/columns=
integers: columns loaded from the file – values: a comma-separated list of integers/comments=
pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters/decimal=
text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/expected=
integer: Expected number of loaded datasets – values: an integer/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Select on formula – values: a piece of Ruby code/histogram=
yes-no: whether to show as a histogram (defaults to false) – values: a boolean:yes
,on
,true
orno
,off
,false
/ignore-empty=
yes-no: if on, skips empty files (default on) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/separator=
pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/skip=
integer: skip that many lines at beginning – values: an integer/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/text-columns=
integers: text columns – values: a comma-separated list of integers/yerrors=
column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’
This is a slightly modified version of load-as-text that handles ASCII files exported from Biologic potentiostats better.
load-as-parameters
– Load files with backend ‘parameters’
load-as-parameters
file… /expected=
integer /flags=
flags /for-which=
code /histogram=
yes-no /ignore-empty=
yes-no /reversed=
yes-no /set-meta=
meta-data /style=
style /yerrors=
column
- file…: the files to load – values: one or more files. Can include wildcards such as *,
[0-4]
, etc… /expected=
integer: Expected number of loaded datasets – values: an integer/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Select on formula – values: a piece of Ruby code/histogram=
yes-no: whether to show as a histogram (defaults to false) – values: a boolean:yes
,on
,true
orno
,off
,false
/ignore-empty=
yes-no: if on, skips empty files (default on) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/yerrors=
column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’
QSoas can also load the parameters from a “Save Parameters” file. The parameters end up one per column, as a function of the perpendicular coordinate used during the fit (or just an index if there were no perpendicular coordinates). This works on the parameters “saved for reusing later”; the ones “exported” can be read using the standard load-as-text command, possibly by specifying the option /comments=# to avoid ignoring lines that start with text (dataset names).
expand
– Expand
expand
/expand-meta=
meta-data /flags=
flags /group-columns=
integer /perp-meta=
text /reversed=
yes-no /set-meta=
meta-data /style=
style /x-columns=
integer /x-every-nth=
integer
/expand-meta=
meta-data: Expand all the given meta-data, one value per produced dataset – values: comma-separated list of meta-data that will be expanded into individual datasets, see there/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/group-columns=
integer: specifies the number of Y columns in the created datasets – values: an integer/perp-meta=
text: defines meta-data from perpendicular coordinate – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/x-columns=
integer: specifies the number X columns – values: an integer/x-every-nth=
integer: specifies the number of columns between successive X values – values: an integer
If a dataset contains several columns, QSoas only displays the second as a function of the first. expand splits the current dataset into as many datasets as there are Y columns, i.e. an X, Y1, Y2, Y3 dataset will be split into three datasets: X, Y1; X, Y2; and X, Y3.
If /perp-meta
is specified, then the given meta-data
will be defined for each dataset, based on the value of the
perpendicular coordinates.
By default, expand assumes that the first column is the only X column. However, if you give a number to the /x-every-nth= option, then expand assumes that there is an X column every that many columns. For instance, /x-every-nth=2 means that the layout of the dataset is X1 Y1 X2 Y2 X3 Y3…
By default, expand
splits every Y column into its own
dataset. However, it is possible to group them using the
/group-columns
option. For instance, splitting a X Y1 Y2 Y3 Y4
dataset with:
QSoas> expand /group-columns=2
will result in two datasets: X Y1 Y2 and X Y3 Y4.
The option /x-columns
has a similar effect, but for the X
columns. It gives the number of columns at the beginning of the
dataset that will be considered as X columns. For instance, if you
split a X1 X2 Y1 Y2 Y3 dataset with the command:
QSoas> expand /x-columns=2
You will get three datasets, X1 X2 Y1, X1 X2 Y2 and X1 X2 Y3.
The option /expand-meta will expand the meta-data whose names are listed. It requires that the meta-data are lists whose size is exactly the number of datasets to be created. See also here.
rename
– Rename
rename
new-name
Other name: a
- new-name: New name – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Changes the name of the current dataset. To help track the operations
applied to a dataset, its name is modified and gets longer after each
modification. Use rename
to give it a more meaningful (and shorter)
name.
If you need to rename a large number of datasets, you probably want to
try save-datasets
with /mode=rename
.
save
– Save
save
file /comments=
text /mkpath=
yes-no /number-format=
text /overwrite=
yes-no /row-names=
yes-no /separator=
text
Other name: s
- file: File name for saving – values: name of a file
/comments=
text: prefix for the comments – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/mkpath=
yes-no: If true, creates all necessary directories – values: a boolean:yes
,on
,true
orno
,off
,false
/number-format=
text: printf-like format string for numbers – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/overwrite=
yes-no: If true, overwrite without prompting – values: a boolean:yes
,on
,true
orno
,off
,false
/row-names=
yes-no: Whether to write row names or not – values: a boolean:yes
,on
,true
orno
,off
,false
/separator=
text: column separator (default: tab) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Saves the current dataset to a file. This command will ask you before
overwriting an existing file, unless /overwrite=true
was specified.
The name of the current dataset will be changed to match the name of the file.
The following options control the output format:
* /separator specifies what separates the columns of numbers (defaults to a tabulation).
* /row-names
specifies if the names of the rows are written out in
the first column; it is off by default.
* /number-format
to fine-tune the way the numbers are written.
If you use /row-names=true
, you should reload the saved file using
QSoas> load-as-text /text-columns=1 file.dat
The /number-format=
option can be used to specify a “sprintf-like” format
for writing the numbers. See Ruby’s
sprintf
for more information. For instance, if you want to produce text files
that could be included into a LaTeX document using siunitx
, you
could use:
QSoas> save table.tex /separator=& /number-format=\num{%g}
Be warned that QSoas is most probably not going to be able to automatically detect the format you used for saving if you use custom separators and/or formats.
save-datasets
– Save
save-datasets
datasets… /comments=
text /expression=
text /format=
text /mkpath=
yes-no /mode=
choice /number-format=
text /overwrite=
yes-no /row-names=
yes-no /separator=
text
Other name: save-buffers
- datasets…: datasets to save – values: comma-separated lists of datasets in the stack, see dataset lists
/comments=
text: prefix for the comments – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/expression=
text: a Ruby expression to make file names – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/format=
text: overrides dataset names if present – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/mkpath=
yes-no: if true, creates all necessary directories (defaults to false) – values: a boolean:yes
,on
,true
orno
,off
,false
/mode=
choice: if using/format
or/expression
, whether to justsave
, to justrename
orboth
(defaults to ‘both’) – values: one of:both
,rename
,save
/number-format=
text: printf-like format string for numbers – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/overwrite=
yes-no: if false, will not overwrite existing files (warning: default is true) – values: a boolean:yes
,on
,true
orno
,off
,false
/row-names=
yes-no: Whether to write row names or not – values: a boolean:yes
,on
,true
orno
,off
,false
/separator=
text: column separator (default: tab) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Saves the designated datasets to files.
Unlike the save
command, this saves the datasets using
their current names, and does not prompt for a file name. It is
probably a good idea to use rename
first, or use the
possibilities below.
This command can rename the datasets before saving them,
by using a
printf
-like
format, as in the following case,
which renames the first 101 datasets to Buffer-000.dat
,
Buffer-001.dat
, and so on:
QSoas> save-datasets /format=Buffer-%03d.dat 0..100
It is also possible to use a full-blown Ruby expression (returning a string) that will be aware of the dataset’s meta-data:
QSoas> save-datasets '/expression="File-#{$meta.sr}.dat"'
This requires careful quoting: outer single quotes ('
) for QSoas and
inner double quotes for Ruby (you could also do the other way
around). See more about the information available from within the Ruby code there.
If you only need to rename the datasets without saving them, use
/mode=rename
.
By default, save-datasets
overwrites the files without asking, but
using /overwrite=false
keeps the original files in place.
save-datasets
does not create directories by default. However, using /mkpath=true makes it possible to save datasets in non-existing directories, which are created when needed. Try out:
QSoas> save-datasets /format=non-existing-directory/buffer-%03d.dat 0..100 /mkpath=true
browse
– Browse files
browse
(/pattern=
)text /auto-split=
yes-no /columns=
integers /comments=
pattern /decimal=
text /for-which=
code /separator=
pattern /skip=
integer /text-columns=
integers
Other name: W
- (
/pattern=
)text (default option): Files to browse – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “ /auto-split=
yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/columns=
integers: columns loaded from the file – values: a comma-separated list of integers/comments=
pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters/decimal=
text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/for-which=
code: Select on formula – values: a piece of Ruby code/separator=
pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters/skip=
integer: skip that many lines at beginning – values: an integer/text-columns=
integers: text columns – values: a comma-separated list of integers
Browses all data files in the current directory (or those matching the wildcard pattern given to /pattern, see load for more information about wildcards). Very useful to quickly find the file you’re looking for.
Using the /for-which
option, one can display only a certain set of
files based on their meta-data and/or statistics. See the
dedicated section for more details.
This command also takes all the fine-tuning options for loading files available to the load command.
Data display
overlay-buffer
– Overlay buffers
overlay-buffer
(/buffers=
)datasets /for-which=
code /style=
style
Other name: V
- (
/buffers=
)datasets (default option): Buffers to overlay – values: comma-separated lists of datasets in the stack, see dataset lists /for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/style=
style: Style for curves display – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Plots one or several datasets on top of the current dataset.
See load
for the description of the /style
option.
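For instance, to overlay the two previous datasets on the current one with a color gradient:
QSoas> overlay-buffer 1,2 /style=red-to-blue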
hide-buffer
– Hide buffers
hide-buffer
buffers…
Other name: H
- buffers…: buffers to hide – values: comma-separated lists of datasets in the stack, see dataset lists
This does the reverse of the overlay-buffer
command. Pass it
the datasets you want to remove from the current view. Don’t be afraid
of passing it non-visible datasets, QSoas won’t shout at you if you do.
overlay
– Overlay
overlay
file… /auto-split=
yes-no /columns=
integers /comments=
pattern /decimal=
text /expected=
integer /flags=
flags /for-which=
code /histogram=
yes-no /ignore-cache=
yes-no /ignore-empty=
yes-no /reversed=
yes-no /separator=
pattern /set-meta=
meta-data /skip=
integer /style=
style /text-columns=
integers /yerrors=
column
Other name: v
- file…: the files to load – values: one or more files. Can include wildcards such as *,
[0-4]
, etc… /auto-split=
yes-no: if on, create a new dataset at every fully blank line (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/columns=
integers: columns loaded from the file – values: a comma-separated list of integers/comments=
pattern: pattern for comment lines – values: plain text, or regular expressions enclosed within / / delimiters/decimal=
text: decimal separator – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/expected=
integer: Expected number of loaded datasets – values: an integer/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Select on formula – values: a piece of Ruby code/histogram=
yes-no: whether to show as a histogram (defaults to false) – values: a boolean:yes
,on
,true
orno
,off
,false
/ignore-cache=
yes-no: if on, ignores cache (default off) – values: a boolean:yes
,on
,true
orno
,off
,false
/ignore-empty=
yes-no: if on, skips empty files (default on) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/separator=
pattern: separator between columns – values: plain text, or regular expressions enclosed within / / delimiters/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/skip=
integer: skip that many lines at beginning – values: an integer/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/text-columns=
integers: text columns – values: a comma-separated list of integers/yerrors=
column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’
This command combines overlay-buffer
and
load
in one go: loads the files given as arguments and
adds them to the current plot; it has the same options as those commands.
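For instance, with a hypothetical set of matching data files in the current directory, something like:
QSoas> overlay spectra_*.dat /style=red-to-blue
loads all the matching files and overlays them on the current plot with a color gradient.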
clear
– Clear view
clear
Removes all datasets except the current one from the display. Use it to revert the effect of a previous overlay command; it can also be useful if, for some reason, a command failed without restoring the display (though that should not happen anyway).
points
– Show points
points
Other name: poi
Shows datapoints (by default, datasets are plotted by connecting datapoints with a line). Beware that it may slow down display if you have a large number of data points.
zoom
– Zoom
zoom
(interactive)
Other name: z
Zooms on the current curve.
Click to delimit a region. Hit x
to zoom in on the X axis, X
to
zoom out, y
and Y
for the Y axis, and z
/Z
for both at the same
time. Hit c
to reset the zoom.
Independently of this function, you can use the mouse wheel at any moment to zoom in and out:
- mouse wheel: zoom in and out vertically
- Shift+mouse wheel: zoom in and out horizontally
- Ctrl (or Cmd) + mouse wheel: zoom in and out (horizontally and vertically)
- Shift+Ctrl + mouse wheel: reset zoom.
If you know the coordinates around which you’d like to zoom, you may
want to use limits
instead.
limits
– Set limits
limits
left right bottom top
- left: Left limit – values: a floating-point number
- right: Right limit – values: a floating-point number
- bottom: Bottom limit – values: a floating-point number
- top: Top limit – values: a floating-point number
This is the non-interactive version of zoom
. You specify the
left, right, bottom and top values of the currently displayed window
directly on the command-line. There are two special values:
- * means “auto”, or in other words the maximum needed to see all the curves for that specific side (left, right, bottom or top);
- = means “don’t change”.
The limits also work in reverse, try out:
QSoas> generate-dataset 0 10 x**2
QSoas> limits 10 0 * *
Data stack manipulation
Data files are loaded and manipulated in a stack. Every time a file is
loaded or a dataset modified, the new dataset is pushed onto the top of
the stack, and becomes the current dataset (numbered 0). Older datasets
have increasing numbers (the previous dataset is 1, the one before 2,
and so on). There is also a “redo” stack populated by the
undo
command. The stack can be manipulated in different
ways:
- the current dataset can be changed using
undo
/redo
; - datasets can be permanently removed from the stack using
drop
; - the whole stack can be saved for later use with
save-stack
and restored usingload-stack
, or dropped altogether usingclear-stack
; - contents of the stack can be displayed in the terminal using
show-stack
or in a dialog bog withbrowse-stack
. - an old dataset can be put back on the top of the stack with
fetch
. - datasets can be flagged (
flag
) or unflagged (unflag
) to be used later using theflagged
dataset selector.
browse-stack
– Browse stack
browse-stack
(/buffers=
)datasets /for-which=
code (interactive)
Other name: K
- (
/buffers=
)datasets (default option): Datasets to show – values: comma-separated lists of datasets in the stack, see dataset lists /for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
Displays the contents of the stack using a dialog box that
works similarly to the one of the browse
command.
It is possible to fine-tune the datasets to browse using:
* the /buffers
option, which takes a dataset list;
* the /for-which
option, that takes a condition, see the
dedicated section for more information.
If the option /meta=
is specified, the command also lists the values
of the given comma-separated meta-data.
show-stack
– Show stack
show-stack
(/number=
)integer /meta=
words
Other name: k
- (
/number=
)integer (default option): Display only that many datasets around 0 – values: an integer /meta=
words: also lists the comma-separated meta-data – values: several words, separated by ‘,’
Shows a small text summary of what the stack is made of. If your stack
is large and you just need to look at a few datasets, use /number=10
for instance (that will only show datasets -9
to 9
).
undo
– Undo
undo
(/number=
)integer
Other name: u
- (
/number=
)integer (default option): Number of operations to undo – values: an integer
Returns to the previous dataset, and pushes the current to the redo
stack. If /number=
is specified, repeats that many times.
redo
– Redo
redo
(/number=
)integer
Other name: r
- (
/number=
)integer (default option): Number of operations to redo – values: an integer
Pops the last dataset from the redo stack and sets it as the current
dataset. /number
has the same meaning as for undo
.
save-stack
– Save stack
save-stack
file /overwrite=
yes-no /rotate=
integer
- file: File name for saving stack – values: name of a file
/overwrite=
yes-no: If true, overwrite without prompting – values: a boolean:yes
,on
,true
orno
,off
,false
/rotate=
integer: if not zero, performs a file rotation before writing – values: an integer
Saves the entire contents of the stack (all the datasets, their flags
and their meta-data) for later use in a .qst
file, which is in a
binary format. This file is only meant to be loaded again with
either the command load-stack
, directly from the
command-line using the --load-stack
command-line
option, or directly by double-clicking
from your favorite file manager.
If you’d rather save every file in the stack separately as a text
file, use the save-datasets
command:
QSoas> save-datasets all
Stack file format: QSoas uses a simple binary format for saving the stack. It stores all the datasets of the stack, including their meta-data, perpendicular coordinates and flags. It does not save:
- the currently displayed datasets (the datasets are saved, but not the information that they are displayed);
- user-defined Ruby functions/variables;
- user-defined fits.
load-stack
– Load stack
load-stack
file /merge=
yes-no
- file: File name of the stack to load – values: name of a file
/merge=
yes-no: If true, merges into the current stack rather than overwriting – values: a boolean:yes
,on
,true
orno
,off
,false
Loads a saved stack, from a file that was created using
save-stack
.
If /merge=true
is used, then the previous datasets are kept, and the
contents of the stack files are just merged into the stack.
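For instance, with a hypothetical stack file yesterday.qst:
QSoas> load-stack yesterday.qst /merge=true
adds the datasets it contains to the current stack instead of replacing it.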
clear-stack
– Clear stack
clear-stack
Other name: delstack
Removes all the datasets from both the normal and the redo stacks.
fetch
– Fetch datasets from the stack
fetch
datasets… /flags=
flags /for-which=
code /reversed=
yes-no /set-meta=
meta-data /style=
style
- datasets…: Datasets to fetch – values: comma-separated lists of datasets in the stack, see dataset lists
/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Puts back a copy of the given datasets on top of the stack. Useful when you want to work again on an old dataset buried in the stack.
It is possible to fine-tune the datasets you pick using the
/for-which
option.
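For instance, assuming datasets numbered 12 and 15 are buried in the stack:
QSoas> fetch 12,15
puts copies of both back on top of the stack.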
drop
– Drop dataset
drop
(/buffers=
)datasets /for-which=
code
- (
/buffers=
)datasets (default option): Datasets to permanently remove – values: comma-separated lists of datasets in the stack, see dataset lists /for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
Permanently deletes the current dataset (or the ones specified in the
/buffers
options) from the stack.
QSoas> drop 3..16
drops all the datasets from 3 to 16 included.
Important: it is not possible to recover a dataset once it has been
dropped from the stack. undo
won’t work.
It is possible to fine-tune the datasets you pick using the
/for-which
option.
flag
– Flag datasets
flag
(/buffers=
)datasets /exclusive=
yes-no /flags=
words /for-which=
code /set=
yes-no
- (
/buffers=
)datasets (default option): Buffers to flag/unflag – values: comma-separated lists of datasets in the stack, see dataset lists /exclusive=
yes-no: If on, clears the given flags on all the datasets but the ones specified – values: a boolean:yes
,on
,true
orno
,off
,false
/flags=
words: Flags to set/unset – values: several words, separated by ‘,’/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/set=
yes-no: If on, clears all the previous flags – values: a boolean:yes
,on
,true
orno
,off
,false
Flags the given dataset (or the current one if none is supplied) for
later use. All currently flagged datasets can be specified using the
flagged
argument to, for instance, overlay-buffer
.
QSoas
supports arbitrary text flags, by passing a comma-separated
list of flags to the /flags=
option. In the absence of that, the
datasets are flagged with the flag name default
. Datasets can hold
an arbitrary number of flags. For instance:
QSoas> flag 0..5 /flags=exp1,fit
flags datasets 0 to 5 with the flags exp1
and fit
. Datasets are
flagged ‘in-place’: the current dataset is not changed.
If the /for-which
option is present, the flags are only applied to
the datasets that match the specifications given. See more about that
there.
By default, flag
does not touch already existing flags. However, if
you use /exclusive=true
, then all the flags that are not set
explicitly with the command are cleared.
unflag
– Unflag datasets
unflag
(/buffers=
)datasets /flags=
words /for-which=
code
- (
/buffers=
)datasets (default option): Buffers to flag/unflag – values: comma-separated lists of datasets in the stack, see dataset lists /flags=
words: Flags to set/unset – values: several words, separated by ‘,’/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
Does the reverse of flag
, that is removes all flags on the
given datasets, or only those specified by the /flags
option if the
latter is present. The /for-which
option works exactly in the same
way as for flag
.
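For instance, to remove a hypothetical fit flag from every dataset in the stack:
QSoas> unflag all /flags=fit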
auto-flag
– Auto flag
auto-flag
(/flags=
)words
- (
/flags=
)words (default option): Flags – values: several words, separated by ‘,’
Flags the datasets produced by all commands afterwards, until a call
to auto-flag
without options:
QSoas> auto-flag /flags=stuff
[ ... create new datasets. They will all be flagged stuff,
until the following command ...]
QSoas> auto-flag
This can be used to flag all the datasets produced by a script, for instance.
sort-datasets
– Sort datasets
sort-datasets
datasets… key /reversed=
yes-no /use-stats=
yes-no
- datasets…: Datasets to sort – values: comma-separated lists of datasets in the stack, see dataset lists
- key: Sorting key (a ruby expression) – values: a piece of Ruby code
/reversed=
yes-no: Sorts in the reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/use-stats=
yes-no: Use statistics in the expressions – values: a boolean:yes
,on
,true
orno
,off
,false
sort-datasets
reorders the given datasets inside the stack using
the Ruby formula given as the last argument. It does not touch the
other datasets from the stack. For instance, to sort the first 10
datasets alphabetically according to their dataset name, it is
possible to use:
QSoas> sort-datasets 0..9 $meta.name
It is possible to sort in the reverse order using /reversed=true
. By
default, the statistics are not available, but you can use
/use-stats=true
to make them available under the variable $stats
(as usual).
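As a sketch, to sort the first ten datasets by their maximum Y value, one could use:
QSoas> sort-datasets 0..9 $stats.y_max /use-stats=true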
Important: this command modifies the stack directly; it is not
possible to undo it, unless you took care of saving the stack beforehand
using save-stack
.
Basic data manipulation at the dataset level
apply-formula
– Apply formula
apply-formula
formula (/buffers=
)datasets /extra-columns=
integer /flags=
flags /for-which=
code /keep-on-error=
yes-no /mode=
choice /name=
text /reversed=
yes-no /set-meta=
meta-data /style=
style /use-meta=
yes-no /use-names=
yes-no /use-stats=
yes-no
Other name: F
- formula: formula – values: a piece of Ruby code
- (
/buffers=
)datasets (default option): Datasets to work on – values: comma-separated lists of datasets in the stack, see dataset lists /extra-columns=
integer: number of extra columns to create – values: an integer/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/keep-on-error=
yes-no: if on, the points where the Ruby expression returns a error are kept, as invalid numbers – values: a boolean:yes
,on
,true
orno
,off
,false
/mode=
choice: operating mode used by apply-formula – values: one of:add-column
,xyy2
,xyz
/name=
text: name of the new column (only in ‘add-column’ mode) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/use-meta=
yes-no: if on (by default), you can use$meta
to refer to the dataset meta-data – values: a boolean:yes
,on
,true
orno
,off
,false
/use-names=
yes-no: if on the columns will not be called x,y, and so on, but will take their name based on the column names – values: a boolean:yes
,on
,true
orno
,off
,false
/use-stats=
yes-no: if on (by default), you can use$stats
to refer to statistics (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
Applies a formula to the current dataset. It should specify how the
x
and/or y
values of the dataset are modified:
QSoas> apply-formula x=x**2
QSoas> apply-formula y=sin(x**2)
QSoas> apply-formula x,y=y,x
The last bit swaps the x and y values of the dataset. The formula must be valid Ruby code.
In addition to x
and y
(note the lowercase !), the formula can
refer to:
- i, the index of the data point
- seg, the number of the current segment (starting from 0)
- x_0, the value of x of the first point of the current segment
- i_0, the index of the first point in the current segment
- y2, y3, etc. when there are more than 2 columns in the dataset
- $c.name, which refers to the value of the column named name, see there
It is possible to modify all of these variables, but only the
modifications in x
, y
, y2
and so on are taken into account. In
particular, the $c.name
cannot be used to modify the value of the
column name
(but see below).
Here is how you can use i
to have even points draw a sine wave and
odd points a cosine:
QSoas> apply-formula y=(i%2==0?sin(x):cos(x))
%
is the modulo operator. The construction with ?
and :
(called
the ternary operator) means:
if i%2==0
is true, then the value is sin(x)
, else cos(x)
.
You can use several instructions by separating them with ;
:
QSoas> apply-formula x=x**2;y=x**2
This results in x
values that are the squares of the old values, and
y
values that are the square of the new x
values.
Extra columns initially filled with 0 can be created by using the
/extra-columns
option:
QSoas> apply-formula /extra-columns=1 y2=y**2
This creates a third column (a second y
column) containing the
square of the values of the Y column.
If /use-stats=true
is used, a global variable $stats
can be used
in the Ruby expression. It contains all the statistics displayed
by stats
. For instance, to normalize the Y values by dividing
by the median, one would use:
QSoas> apply-formula /use-stats=true y=y/$stats.y_med
Note that you can make use of the special /=
operator to shorten
that into:
QSoas> apply-formula /use-stats=true y/=$stats.y_med
Statistics by segments (see more about segments there) are available too, which means if you want to normalize by the medians of the first segment, you could do
QSoas> apply-formula /use-stats=true y/=$stats[0].y_med
If /use-meta
is true
(the default), then a global variable $meta
is defined that contains the value of the meta-data
(what is shown by show
). What you make of this will greatly
depend on the meta-data QSoas has gathered from your file (and those
you have set manually using set-meta
).
Some results will give “invalid numbers”, such as sqrt(-1)
. By
default, QSoas
strips the points corresponding to the invalid
results, but you can keep them (as invalid numbers) using
/keep-on-error=true
(but be aware that working with invalid numbers
is a real pain !).
It is now possible to work with several datasets using the /buffers
option, and control the resulting datasets using the commands described
there.
If the Ruby code uses the Ruby keyword break
, then the processing of
the dataset ends at that moment, keeping all the data points that have
been calculated so far.
Using column and row names
It is possible to use column and row names:
- The syntax $c.name refers to the value of the column named name.
- $row_name is the name of the current row. It can be used to modify the row names of the dataset.
- It is possible to set the value of a named column directly. This requires using the /use-names=true option, which replaces all the standard x, y, y2 names by their real names. Note: this will only work if the column names are unique and valid Ruby names. The following command modifies the column names to ensure this is the case:
QSoas> set-column-names /sanitize-names=true
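Once the names are sanitized, and assuming the dataset contains a hypothetical column named current, one could then write:
QSoas> apply-formula /use-names=true current=current*1e6
to rescale that column in place.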
Other modes
apply-formula
offers two other modes in addition to what is
described above, in which all columns have to be modified using either
x
, y
, or their real names.
With /mode=add-column
, the value of the expression is used to create
a single new column. The other columns are not modified. You can
specify the name of the new column using the /name=
option.
For instance, the following command adds a new column named product
that contains the product of the columns a
and b
:
QSoas> apply-formula /mode=add-column $c.a*$c.b /name=product
This is very useful to create a named column in datasets where the number of columns is not known (but their names are).
With /mode=xyz
, the whole data is considered as a single
table. x
is the usual x value, and z
corresponds to
the perpendicular coordinates. This mode modifies all
but the first column. There is no need to specify y=
in the
formula.
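As a hedged illustration, assuming the perpendicular coordinates z hold, say, times, something like:
QSoas> apply-formula /mode=xyz y/(1+z)
would divide each table value by one plus its perpendicular coordinate (no y= is needed in this mode).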
dx
– DX
dx
Replaces the Y values by the values of delta X, i.e.,
y[i] = x[i+1] - x[i]
. This is useful to see if the X values are
equally spaced.
dy
– DY
dy
Same as dx
but for Y values: replaces the Y values by the
values of delta Y.
zero
– Makes 0
zero
value /axis=
axis
- value: – values: a floating-point number
/axis=
axis: which axis is zero-ed (default y) – values: one of:x
,y
Given an X value, shifts the Y values so that the point closest to the given X value has a Y value of 0.
If /axis
is x
, swap X and Y in the above description.
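For instance, with an arbitrary X value of 0.3:
QSoas> zero 0.3
shifts the Y values so that the point closest to X = 0.3 ends up at Y = 0.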
shiftx
– Shift X values
shiftx
Shifts the X values so that the first point has an X value of 0.
norm
– Normalize
norm
(/map-to=
)numbers /positive=
yes-no
- (
/map-to=
)numbers (default option): Normalizes by mapping to the given segment – values: several floating-point numbers, separated by : /positive=
yes-no: whether to normalize on positive or negative values (default true) – values: a boolean:yes
,on
,true
orno
,off
,false
Normalizes the current dataset by dividing by its maximum value, or, if
/positive=false
, by the absolute value of its most negative value.
If the /map-to
option is specified, the original dataset is mapped
linearly to the given interval:
norm /map-to=2:4
shifts and scales the original data so that the Y minimum is 2 and the Y maximum is 4.
deldp
– Deldp
deldp
(interactive)
With this command, you can click on given data points to remove
them. Useful to remove a few spikes from the data. Middle click or q
to accept the modifications, hit escape to cancel them.
edit
– Edit dataset
edit
Opens a spreadsheet-like window where you can view and edit the individual values of the current dataset. If you want to save your modification, press the “push new” button.
sort
– Sort
sort
(/buffers=
)datasets /column=
column /flags=
flags /for-which=
code /reverse=
yes-no /reversed=
yes-no /set-meta=
meta-data /style=
style
- (
/buffers=
)datasets (default option): Datasets to sort – values: comma-separated lists of datasets in the stack, see dataset lists /column=
column: – values: the number/name of a column in a dataset/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/reverse=
yes-no: – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Sorts the dataset in increasing X values. It can work on several
datasets specified by the /buffers=
option, and in that case
produces several datasets. The behaviour is controlled by the various
options, see there for more information.
In addition, it is possible to control the behaviour of sort
using
the following options:
- /column specifies the column on which the sorting should be done (defaults to the X column);
- if /reverse is true, then the dataset will be sorted in descending order.
reverse
– Reverse
reverse
Reverses the order of all the data points: the last one now becomes
the first one, and so on. Though this has no effect on the look of the
data, this will impact commands that work with indices, such as
cut
and the multi-dataset processing commands (such as
subtract
, div
) with /mode=indices
.
rotate
– rotates the lines of the dataset
rotate
delta
- delta: offset of the rotation – values: an integer
This command “rotates” the dataset: delta points are taken from the end of dataset and put back at the beginning (in the same order). The overall number of points does not change. A negative delta will take points from the beginning to put them at the end.
strip-if
– Strip points
strip-if
formula (/buffers=
)datasets /flags=
flags /for-which=
code /reversed=
yes-no /set-meta=
meta-data /style=
style /threshold=
integer /use-meta=
yes-no /use-stats=
yes-no
- formula: Ruby boolean expression – values: a piece of Ruby code
- (
/buffers=
)datasets (default option): Datasets to work on – values: comma-separated lists of datasets in the stack, see dataset lists /flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/threshold=
integer: If the stripping operation leaves less than that many points, do not create a new dataset – values: an integer/use-meta=
yes-no: if on (by default), you can use$meta
to refer to the dataset meta-data – values: a boolean:yes
,on
,true
orno
,off
,false
/use-stats=
yes-no: if on, you can use$stats
to refer to statistics (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
Removes all points for which the ruby expression returns true
. This
can be used for quite advanced data selection:
QSoas> strip-if x>4
This removes all points whose X value is greater than 4.
QSoas> strip-if x>4||x<2
This removes all points whose X value is greater than 4 or whose X
value is lower than 2. The ||
bit means OR. In other terms, it
keeps only the X values between 2 and 4.
QSoas> strip-if x*y<10&&x>2
This removes all the points for which both the X value is greater than 2 and the product of X and Y is lower than 10.
When reading data files that contain spurious data points (such as
text lines containing no data within a file read with
load-as-text
), QSoas replaces the missing data
by weird numbers called NaN (Not a Number). They can be useful at
times, but mess up statistics and fits. To remove them, use:
QSoas> strip-if x.nan?||y.nan?
Like in apply-formula
, you can use the statistics and the
meta-data of the datasets if you use the options /use-meta
(on by
default) and /use-stats
, or also the column names using $c.name
.
By default, strip-if
creates a new dataset regardless of the number
of points left (even if there are no points left). Giving a value to
the /threshold
option will prevent strip-if
from creating a new
dataset if it has less than that many points.
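For instance, with an arbitrary threshold of 10 points:
QSoas> strip-if x<0 /threshold=10
only pushes a new dataset if at least 10 points survive the stripping.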
Like the other commands that can produce several datasets in one go,
strip-if
has a number of options to control how
the datasets are produced.
integrate
– Integrate
integrate
/index=
integer
/index=
integer: index of the point that should be used as y = 0 – values: an integer
Integrate just does the reverse of diff
and integrates the
current dataset. The first data point is the one for which Y=0, unless an
index is specified to the /index
option, in which case the numbered
point ends up being at 0.
diff
– Derive
diff
/derivative=
integer /order=
integer
/derivative=
integer: the number of the derivative to take, only valid together with the order option – values: an integer/order=
integer: total order of the computation – values: an integer
Computes the 4th order accurate derivative of the dataset.
This is efficient to compute the derivative of smooth data, but it
gives very poor results on noisy data. In general, for derivation of
real data, prefer filter-fft
, filter-bsplines
or
auto-reglin
, which will give much better results.
Starting from QSoas version 2.1, a second mode is available, in which
you can choose an arbitrary order for the derivation (has to be less
than the number of points of the dataset), via the option /order=
,
and an optional derivative via the /derivative
option. For instance,
you can reproduce the effect of diff2
using:
QSoas> diff /order=4 /derivative=2
diff2
– Derive twice
diff2
Computes the 4th order accurate second derivative of the dataset.
The same warnings apply as for diff
.
dataset-options
– Options
dataset-options
/histogram=
yes-no /yerrors=
column
/histogram=
yes-no: whether to show as a histogram (defaults to false) – values: a boolean:yes
,on
,true
orno
,off
,false
/yerrors=
column: name of the column containing y errors – values: the number/name of a column in a dataset, or ‘none’ to mean ‘no column’
Sets options for the current dataset:
- /yerrors sets the display of errors on Y values, see there for more information on how to specify the columns;
- /histogram sets whether or not the dataset should be displayed as a histogram.
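For instance, assuming the third column (y2) of the current dataset holds the errors on Y:
QSoas> dataset-options /yerrors=y2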
edit-errors
– Edit errors
edit-errors
(interactive)
Provides an interface for editing manually the errors attached to each point of the current dataset. This function will create a column containing errors if there is none yet.
Pick left and right bounds with the left and right mouse buttons and
set the errors within the bounds with i
and outside with o
. This
is typically used to crudely exclude some bits of the dataset from
fitting, by setting much larger errors for the bits than for the rest.
set-row-names
– Set row names
set-row-names
(/names=
)words /clear=
yes-no
- (
/names=
)words (default option): Names of the columns – values: several words, separated by ‘’ /clear=
yes-no: Removes all the names – values: a boolean:yes
,on
,true
orno
,off
,false
Sets names of the rows.
The names can either be a simple list, or a series of specifications
like #10:name
, #-4:name
or #1..5:name
, which set the row name
name
for, respectively, the 11th row (indices are 0-based), the 4th row
starting from the end, or all rows between the second and the sixth
(included).
set-column-names
– Set column names
set-column-names
(/names=
)words /clear=
yes-no /columns=
columns /sanitize-names=
yes-no
- (
/names=
)words (default option): Names of the columns – values: several words, separated by ‘’ /clear=
yes-no: Removes all the names – values: a boolean:yes
,on
,true
orno
,off
,false
/columns=
columns: Sets the names of these columns only – values: a comma-separated list of columns names/sanitize-names=
yes-no: Adapts the names so that they can be used with apply-formula /use-names=true – values: a boolean:yes
,on
,true
orno
,off
,false
Sets the column names to the list of names given. By default, the
names given apply in order (and the other ones are left untouched),
but you can choose which column(s) to apply to using the /columns=
option.
For instance, this sets only the name of the 5th column (corresponding
to y4
):
QSoas> set-column-names new_y4 /columns=y4
/clear=yes
clears all the column names, so they are back to the
default values (x
, y
, y2
and so on).
/sanitize-names=true
will make the column names suitable for use in
apply-formula
with /use-names=true.
Splitting the dataset in bits (and back)
cut
– Cut
cut
(interactive)
Other name: c
Interactively cuts bits out of the dataset. Left and right mouse clicks
set the left and right limits. Middle click or q
quits leaving only the
part that is within the region, while u
leaves only the outer
part. r
removes the part inside the region, but lets you keep on
editing the dataset. Hit escape to cancel.
By default, the Y values are displayed as a function of the index; you
can switch back to display Y values as a function of X by hitting x
.
chop
– Chop dataset
chop
(/lengths=
)numbers /flags=
flags /from-meta=
text /mode=
choice /reversed=
yes-no /set-meta=
meta-data /set-segments=
yes-no /style=
style
- (
/lengths=
)numbers (default option): Lengths of the subsets – values: several floating-point numbers, separated by , /flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/from-meta=
text: – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/mode=
choice: Whether to cut on index or x values (default) – values: one of:deltax
,index
,indices
,xvalues
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/set-segments=
yes-no: Whether to actually cut the dataset, or just to set segments where the cuts would have been – values: a boolean:yes
,on
,true
orno
,off
,false
/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Cuts the dataset into several parts based on the numbers given as
arguments, and saves them as separate
datasets. The interpretation of the numbers depends on the value of the
/mode
option:
- deltax (default): the numbers are the lengths (in terms of X) of the sub-datasets
- xvalues: the numbers are the X values at which to split
- index (or indices): the numbers are the indices of the points at which to split
If /set-segments
is on, the X values are not used to create independent
datasets but rather to set the position of the segments.
If the option /from-meta
is used, it designates a meta-data
containing a list of values. In that case, the values given on the
command-line are ignored, and the values contained in the meta are
used instead.
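For instance, assuming the X values of the current dataset span the arbitrary split points below:
QSoas> chop 100,200 /mode=xvalues
splits the dataset at X = 100 and X = 200, producing three new datasets.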
splita
– Split first
splita
Returns the first part of the dataset, until the first change of sign of delta X.
Useful to get the forward scan of a cyclic voltammogram.
splitb
– Split second
splitb
Returns the part of the dataset after the first change of sign of delta X.
Useful to get the backward scan of a cyclic voltammogram.
split-monotonic
– Split into monotonic parts
split-monotonic
/flags=
flags /group=
integer /keep-first=
integer /keep-last=
integer /reversed=
yes-no /set-meta=
meta-data /style=
style
/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/group=
integer: Group that many segments into one dataset – values: an integer/keep-first=
integer: Keep only the first n elements of the results – values: an integer/keep-last=
integer: Keep only the last n elements of the results – values: an integer/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Splits the dataset into datasets where all parts have X values that increase or decrease monotonically.
With /group=2
, each resulting dataset will contain two monotonic
segments.
Using the /keep-first
or /keep-last
options make it possible to
only keep a given number of the generated datasets.
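For instance, to regroup the monotonic parts two by two (as in voltammetric cycles) and tag them with a hypothetical flag:
QSoas> split-monotonic /group=2 /flags=cycles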
unwrap
– Unwrap
unwrap
/reverse=
yes-no /scan-rate=
number
/reverse=
yes-no: If true, reverses the effect of a previous unwrap command – values: a boolean:yes
,on
,true
orno
,off
,false
/scan-rate=
number: Sets the scan rate – values: a floating-point number
This command makes the X values of the current dataset monotonic by ensuring that the value of delta X always has the same sign, changing it if needed. The command places segment limits at the positions of the changes in direction.
This is useful for instance to convert a cyclic voltammogram from current versus potential to current versus time; for that purpose, the scan rate can be
provided using the /scan-rate=
option, or can be guessed from the
sr
meta-data.
The unwrap
operation can be reverted by calling unwrap
with
/reverse=true
, which will use the scan rate information and the
position of the segments to reconstruct the original data.
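For instance, for a voltammogram recorded at a hypothetical scan rate of 0.02 V/s and lacking an sr meta-data:
QSoas> unwrap /scan-rate=0.02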
cat
– Concatenate
cat
buffers… /add-segments=
yes-no /contract-meta=
meta-data
Other name: i
- buffers…: Datasets to concatenate – values: comma-separated lists of datasets in the stack, see dataset lists
/add-segments=
yes-no: If on (default) segments are added between the old datasets – values: a boolean:yes
,on
,true
orno
,off
,false
/contract-meta=
meta-data: Contracts all the named meta data meta-data lists – values: comma-separated list of meta-data to group into lists, see there
Concatenates the datasets given as arguments, adding segment stops
in between (unless /add-segments=false
is used). This can be used to
reverse the effect of the previous commands.
This does not change the number of columns. If you want to gather
several Y columns as a function of the same X, use
contract
instead.
If the option /contract-meta
is used, then the meta-data whose names
are given to that option will be gathered from all the original
datasets and transformed into a meta-data list. See
there for more information.
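For instance, assuming datasets 2, 1 and 0 are successive pieces of the same experiment and all carry an sr meta-data:
QSoas> cat 2,1,0 /contract-meta=sr
concatenates them (oldest first) and gathers the individual sr values into a single meta-data list.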
Dataset’s meta-data and perpendicular coordinates
QSoas’ datasets (or buffers) hold more than just columns of
numbers. When a file is loaded, QSoas also gathers as much information
as possible about that file, such as original file name, file date,
and, for file formats supported by QSoas, details about the
experimental conditions recorded in that file. These are known as
“meta-data”, and can be displayed using the show
command.
Here are some meta-data of particular significance, available to all datasets loaded from files:
- file_date is the date of the file
- original_file is the file name of the loaded file
- age is how old the file was, in seconds, when the current QSoas session was started
- commands is the list of commands that have been applied to this dataset since its load/creation
Upon saving using save
all meta-data are saved as
comments in the text file.
Perpendicular coordinates make sense when a dataset has several Y
columns. For instance, when the dataset consists of spectra taken at
different times, like in the
tutorial (or at different
solution potentials for a redox titration), then the X values will be
the wavelength, and each Y column will correspond to a different
time. Then the time is the perpendicular coordinate. One can set
the perpendicular coordinate manually using set-perp
.
Many commands use perpendicular coordinates, most notably
transpose
(that would convert the columns of absorbance versus wavelength for
the different times of the example above into columns of absorbance versus
time for the different wavelengths), and all the multi-fit commands, which
show parameters as a function of the perpendicular coordinates when
applicable.
Some of the meta-data has special meaning for QSoas
, which uses them
for specific functions:
sr
is taken to be the scan rate of a voltammogram. This information is used bybaseline
andfit-adsorbed
.
Meta-data can be of several types, like text or number, but also
lists. See for instance the /type=number-list
option of
set-meta
.
Selecting datasets and files based on meta-data
Some commands, namely flag
, unflag
and browse
accept a /for-which
option to select the datasets (or files) they
work on based on their properties. The value of the /for-which
is a
ruby formula that uses the global variables $meta
and
$stats
. For instance, the following command flags all the
datasets that have a maximum value greater than 1e-4
:
QSoas> flag all /for-which $stats.y_max>=1e-4
How to test for equality: in ruby, you need to use ==
to test whether two values are the same. For instance, to flag
voltammograms in which the scan rate is 0.1 V/s, you have to use:
QSoas> flag all /for-which $meta.sr==0.1
Replacing the ==
by =
in the code above leads to selecting all
the datasets, because $meta.sr=0.1
is always true
(see more about
the ruby expressions there).
Meta-data expansion/contraction
Some commands like contract
gather several datasets into a
single one, or on the contrary, like expand
create many
datasets from a single one.
By default, the meta-data are either all copied from the source (when creating several datasets), or taken from one of the datasets (when making one from several). However, in some cases, you may want to contract all the values of a meta-data from several datasets into a single meta-data containing a list of the original meta-data, or, conversely, to expand the list by taking one value for each of the datasets produced.
This can be achieved using the relevant /expand-meta
or
/contract-meta
option which takes a list of the names of the
meta-data you want to expand/contract.
show
– Show information
show
datasets…
- datasets…: Datasets to show – values: comma-separated lists of datasets in the stack, see dataset lists
This command gives detailed information about the datasets given as arguments, such as the number of rows, columns, segments, but also the flags the dataset may have, and all their meta-data:
QSoas> show 0
Dataset 08.oxw: 2 cols, 4975 rows, 1 segments
Flags:
Meta-data: delta_t_0 = 950 gpes_file = D:\Vincent\140428\08 original-file = /home/vincent/Data/140428/08.oxw
age = 428907.581 steps = 1 title =
file-date = 2014-05-23T21:23:38 exp-time = 14:03:08 comments =
t_0 = 0 E_0 = -0.65 method = chronoamperometry
set-meta
– Set meta-data
set-meta
name value /also-record=
yes-no /type=
choice
- name: name of the meta-data – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- value: value of the meta-data – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
/also-record=
yes-no: also record the meta-data as if one had used record-meta on the original file – values: a boolean:yes
,on
,true
orno
,off
,false
/type=
choice: type of the meta-data – values: one of:number
,number-list
,text
Using set-meta
, one can set the value of the named meta-data for the
current dataset. name can have any value; it does not have to
already exist in the dataset’s meta-data.
The actual type of the meta-data can be specified using the /type
option. For now, it is mostly useful to specify lists of numbers:
QSoas> set-meta injection-times 100,200,300 /type=number-list
This specifies that the meta-data injection-times is a list of numbers (and not text).
Meta-data are not permanent, and will be forgotten from one QSoas
session to another. To store the meta-data permanently so that it is
set again the next time QSoas
loads this file, either use the
record-meta
, or use /also-record=true
, which has the same
effect as running record-meta
on the original file.
record-meta
– Set meta-data
record-meta
name value files… /exclude=
files /remove=
yes-no /type=
choice
- name: name of the meta-data – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- value: value of the meta-data – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- files…: files on which to set the meta-data – values: one or more files. Can include wildcards such as *,
[0-4]
, etc… /exclude=
files: exclude files – values: one or more files. Can include wildcards such as *,[0-4]
, etc…/remove=
yes-no: remove the meta rather than adding it – values: a boolean:yes
,on
,true
orno
,off
,false
/type=
choice: type of the meta-data – values: one of:number
,number-list
,text
record-meta
is the “permanent” version of set-meta
. It sets
meta-data permanently for a series of files (and not datasets as in
the case of set-meta
). For instance, after running
QSoas> record-meta pH 7 experiment.dat another.dat
The next time QSoas
loads either experiment.dat
or another.dat
,
they will automatically have a meta-data called pH
with a value 7
.
Behind the scenes: the meta-data are stored in special files, one for each of
the data files. They are almost plain text files (more precisely, JSON
files). They have the names of the original files with a .qsm
suffix
appended. If you move data files around, you need to also move these
files if you want the meta-data to follow.
If you use /remove=true
, then the meta-data is removed instead of
being added. Important note: you still must provide a value, which
will not be used. This way, to remove the meta data added by the
previous command, you could use:
QSoas> record-meta /remove=true pH whatever experiment.dat another.dat
save-meta
– Save meta-data back to file
save-meta
(/file=
)file
- (
/file=
)file (default option): save for this file – values: name of a file
This command saves the meta-data of the current dataset, either to the
“original file”, that is the file the current dataset is derived from,
or to the file given as the /file
option.
This command does not modify the actual data, just the .qsm
file
containing the meta-data.
set-perp
– Set perpendicular
set-perp
(/coords=
)numbers /from-row=
integer
- (
/coords=
)numbers (default option): The values of the coordinates (one for each Y column) – values: several floating-point numbers, separated by , /from-row=
integer: Sets the values from the given row (and delete it) – values: an integer
Sets the perpendicular coordinates for the Y columns, as comma-separated values. There must be as many perpendicular coordinates as there are Y columns.
Another possibility is to specify a row using /from-row
. In that
case, the perpendicular coordinates are taken from the values of the
row (the first element, corresponding to the x value, is ignored), and
the row is deleted. This is useful when the text data contains the
perpendicular coordinates as a “text header”.
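For instance, assuming the current dataset has four Y columns recorded at hypothetical times 0, 10, 20 and 30 s:
QSoas> set-perp 0,10,20,30
Alternatively, if the first row of the loaded file holds those times, set-perp /from-row=0 would use (and delete) that row.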
transpose
– Transpose
transpose
This command transposes the matrix of the Y columns, while paying
attention to the perpendicular coordinates. In short, if one starts
from a series of Y columns representing spectra as a function of
wavelength (the X column) for different values of time (each column
at a different value of time), then after transpose
, the new
dataset contains columns describing the time evolution of the
absorbance for different values of the wavelength (one for each column).
tweak-columns
– Tweak columns
tweak-columns
/flip=
yes-no /flip-all=
yes-no /remove=
columns /select=
columns
/flip=
yes-no: If true, flips all the Y columns – values: a boolean:yes
,on
,true
orno
,off
,false
/flip-all=
yes-no: If true, flips all the columns, including the X column – values: a boolean:yes
,on
,true
orno
,off
,false
/remove=
columns: the columns to remove – values: a comma-separated list of columns names/select=
columns: select the columns to keep – values: a comma-separated list of columns names
tweak-columns
provides means to remove and select columns.
If a list of columns is given to the /remove
option, then the given columns are removed. If /flip
is on, then all
Y columns are reversed. If /flip-all
is on, then all columns,
including the X column, are reversed.
If a list of columns is given to the /select
option, then the newly created dataset will be composed only of the
columns specified, in the order they are specified. The columns can be
used more than once.
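For instance, assuming the current dataset has at least four columns, the following keeps only the X column and two chosen Y columns, reordered:
QSoas> tweak-columns /select=x,y3,y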
split-on-values
– Split on column values
split-on-values
meta… columns /flags=
flags /reversed=
yes-no /set-meta=
meta-data /style=
style
- meta…: Names of the meta to be created – values: several words, separated by ‘,’
- columns: Columns whose values one should split on – values: a comma-separated list of columns names
/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
This command splits the current dataset into a number of datasets, based on the contents of the columns columns. Each newly created dataset corresponds to points in the original dataset that had exactly the same values in the designated columns. These columns are removed from the newly created datasets and their values are used to set the meta-data meta. There must be as many comma-separated names in meta as there are column names in columns.
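As a hedged example, assuming the current dataset has a third column (y2) holding a repeated control value:
QSoas> split-on-values level y2
creates one dataset per distinct value found in y2, removing that column and storing its value in a hypothetical meta-data named level.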
Data filtering/processing
QSoas provides different ways to process data to remove unwanted noise:
- Fourier transform filtering using
filter-fft
orauto-filter-fft
. - Data approximation using basis splines via
filter-bsplines
orauto-filter-bs
. - Other kinds of filters using
kernel-filter
. - Splike removals using
remove-spikes
or evendeldp
.
In addition, QSoas provides ways to remove calculated “baselines”:
- baselines interpolated from datapoints with
baseline
- baselines interpolated between two segments using either a cubic
function or an exponential function with
catalytic-baseline
filter-fft
– FFT filter
filter-fft
/derive=
integer (interactive)
/derive=
integer: The starting order of derivation – values: an integer
Filters data using FFT, i.e. the data is Fourier transformed, then a filter function is applied in the frequency domain and the result is transformed back.
The cutoff can be changed using the mouse left/right buttons. The
power spectrum can be displayed using the p
key, and the derivative
can be displayed with d
(in which case you get the derivative of the
signal when accepting the data).
Behind the scenes, a cubic baseline is computed and subtracted from
the data to ensure that the data to which the FFT is applied has 0
value and 0 derivative on both sides. This greatly reduces artifacts
at the extremities of the dataset. This baseline is computed using a
small heuristic. You can display it using the b
key.
If you want to do that non-interactively, look at auto-filter-fft
.
filter-bsplines
– B-Splines filter
filter-bsplines
/weight-column=
column (interactive)
/weight-column=
column: Use the weights in the given column – values: the number/name of a column in a dataset
Filters the data using B-splines: B-splines are polynomial functions of a given order defined over segments. The filtering process finds the linear combination of these spline functions that is the closest to the original data.
This approach amounts to taking the projection of the original data onto the subspace of the polynomial functions.
More information about the polynomial splines used can be found in the GSL documentation.
The result can be tuned by placing “nodes”, i.e. the X positions of the
segments over which the splines are defined. Put more nodes in an area
where the data is not described properly by the smoothed
function. Increasing the order (using +
) may help too.
Like for filter-fft
, you can derive the data as
well pushing the d
key.
Hitting the o
key optimizes the position of the segments in order
to minimize the difference between the data and the
approximation. (be careful as this function may fail at times).
If you want to do that non-interactively, look at auto-filter-bs
.
auto-filter-bs
– Auto B-splines
auto-filter-bs
(/buffers=
)datasets /derivatives=
integer /flags=
flags /for-which=
code /number=
integer /optimize=
integer /order=
integer /reversed=
yes-no /set-meta=
meta-data /style=
style /weight-column=
column
Other name: afbs
- (
/buffers=
)datasets (default option): Datasets to filter – values: comma-separated lists of datasets in the stack, see dataset lists /derivatives=
integer: computes derivatives up to this number – values: an integer/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/number=
integer: number of segments – values: an integer/optimize=
integer: number of iterations to optimize the position of the nodes (defaults to 15, set to 0 or less to disable) – values: an integer/order=
integer: order of the splines – values: an integer/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/weight-column=
column: uses the weights in the given column – values: the number/name of a column in a dataset
Filters the data using B-splines in a non-interactive fashion.
Performs automatically an optimization step, like hitting o
in
filter-bsplines
, with a number of iterations that is
configurable using the /optimize=
option (0 disables that altogether).
This is mostly useful in scripts.
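For instance, with arbitrary settings of 20 segments and order-4 splines:
QSoas> auto-filter-bs /number=20 /order=4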
auto-filter-fft
– Auto FFT
auto-filter-fft
(/buffers=
)datasets /cutoff=
integer /derive=
integer /flags=
flags /for-which=
code /reversed=
yes-no /set-meta=
meta-data /style=
style /transform=
yes-no
Other name: afft
- (
/buffers=
)datasets (default option): Datasets to filter – values: comma-separated lists of datasets in the stack, see dataset lists /cutoff=
integer: value of the cutoff – values: an integer/derive=
integer: differentiate to the given order – values: an integer/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/transform=
yes-no: if on, pushes the transform (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
Filters data using FFT in a non-interactive fashion. Useful in scripts.
With /transform=yes
, pushes the Fourier transform of the data, in
the format:
freq magnitude real imag
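For instance, with an arbitrary cutoff, the following filters the current dataset and takes its first derivative in one go:
QSoas> auto-filter-fft /cutoff=50 /derive=1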
auto-reglin
– Automatic linear regression
auto-reglin
/filter=
yes-no /window=
integer
/filter=
yes-no: If true (not the default), filter the data instead of computing the slope – values: a boolean:yes
,on
,true
orno
,off
,false
/window=
integer: Number of points (after and before) over which to perform regression – values: an integer
Performs a linear regression on a number of points around each point
of the graph and creates a dataset from the resulting slopes, which
results in a derivative dataset. This command is similar to but
provides less noisy output than diff
, and also similar to
filtering with FFT (using filter-fft
) and taking the
derivative.
The option /window
specifies the number of points on either side of
each point used for linear regression (the default is 7, so the linear
regression is made over 15 points in total).
With /filter=true
, the linear regression is used to predict values
of the points, which acts as a filter of the data, just like
filter-fft
or filter-bsplines
.
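For instance, to smooth the derivative further by regressing over 41 points in total (an arbitrary window):
QSoas> auto-reglin /window=20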
kernel-filter
– Kernel filter
kernel-filter
/alpha=
number /size=
integer /threshold=
number /type=
choice
/alpha=
number: Gaussian spread (only for gaussian) – values: a floating-point number/size=
integer: Half window size – values: an integer/threshold=
number: Threshold for impulse filters – values: a floating-point number/type=
choice: Kernel type – values: one of:gaussian
,impulse-iqr
,impulse-mad
,impulse-qn
,impulse-sn
,median
,rmedian
This command filters the data using different filters that have in
common that they work on a small number of points at a time (given as
the value of the /size
option, which corresponds to the
half-width of the window).
The filters available are:
* gaussian
, a gaussian kernel (see
there),
whose spread can be parametrized using the /alpha
option;
* median
and rmedian
are median and recursive median filters (see
there);
* impulse-iqr
, impulse-mad
, impulse-qn
and impulse-sn
are
various types of impulsion detection filters (see
there),
parametrized using the /threshold=
option.
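For instance, a median filter over an 11-point window (arbitrary size):
QSoas> kernel-filter /type=median /size=5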
remove-spikes
– Remove spikes
remove-spikes
/factor=
number /force-new=
yes-no /number=
integer
Other name: R
/factor=
number: threshold factor – values: a floating-point number/force-new=
yes-no: creates a new dataset even if no spikes were removed (default: false) – values: a boolean:yes
,on
,true
orno
,off
,false
/number=
integer: looks at that many points – values: an integer
Removes spikes using a simple heuristic: a point is considered a
“spike” if over the /number
points, the difference between this
point and the ones next to it is larger than /factor
times the
other differences in the interval. This command will not create a new
dataset if no spikes were removed, unless you specify
/force-new=true
, in which case the dataset is duplicated; this is
useful for scripting, when you need a reproducible number of created
datasets, regardless of whether spikes are present or not.
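For instance, with arbitrary settings, a script might use:
QSoas> remove-spikes /number=10 /factor=3 /force-new=true
so that exactly one new dataset is always produced.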
downsample
– Downsample
downsample
/factor=
integer
/factor=
integer: Downsampling factor – values: an integer
Creates a dataset with about factor times fewer points than the original dataset (10 times fewer by default) by averaging the original X and Y values in groups of factor points. This command averages the other columns too.
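For instance, to keep roughly one point out of twenty:
QSoas> downsample /factor=20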
baseline
– Baseline
baseline
(interactive)
Other name: b
Draw a baseline by placing markers on the curve using the mouse (or
off the curve, after using key o
). The baseline is computed using one
of several interpolation algorithms: C-splines, linear or polynomial
interpolation and Akima splines (the latter usually follows best the
accidents on the curve). Cycle between the various schemes by hitting
t
.
It is possible to leave while saving not the interpolated data, but just the
interpolation “nodes” (i.e. the big dots), by pushing the p
key. This has two advantages: first, one can load nodes from a dataset
by hitting the L
key and providing the dataset number (or just their
X value by hitting l
). Second, if one has the nodes and just the X
values, one can generate the interpolated data using interpolate
.
The area between the baseline and the curve is displayed in the
terminal. If the dataset has a meta-data named sr
, it is taken as a
scan rate (as in cyclic voltammetry), and the charge is displayed too.
interpolate
– Interpolate
interpolate xvalues nodes /type=choice

- xvalues: Dataset serving as base for X values – values: a dataset in the stack. Can be designated by its number or by a flag (if it’s unique)
- nodes: Dataset containing the nodes X/Y values – values: a dataset in the stack. Can be designated by its number or by a flag (if it’s unique)
- /type=choice: Interpolation type – values: one of: akima, linear, polynomial, spline
Given a dataset containing xvalues and another one containing the X/Y
position of interpolation nodes saved using p
from within
baseline
, this command regenerates the interpolated values, for the
given X values.
Through this approach, one can draw a baseline once, save the nodes,
and later regenerate the baseline (and hence the baseline-subtracted
data) using interpolate from within a script. This has the advantage
that one can always have a close look at the quality of the baseline,
and tweak it if need be.
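As a sketch of such a script (the dataset numbers are purely illustrative and depend on your stack), assuming the data whose X values you want to use is dataset 1 and the saved interpolation nodes are dataset 0:
QSoas> interpolate 1 0 /type=akima
This pushes the regenerated baseline onto the stack, which can then be removed from the data using subtract.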
catalytic-baseline
– Catalytic baseline
catalytic-baseline
(interactive)
Other name: B
Draws a so-called “catalytic” baseline. There are several types of baselines, but they all share the following features:
- they are defined by 4 points
- the first two points correspond to points where the baseline sticks to the data
- the last two points give a “direction”
There are two baselines implemented for now:
- a cubic baseline, that goes through the first two points and is parallel to the slope of the last two
- an exponential baseline, that goes through the first two points and has the same ratio as the data for the last two points
solve
– Solves an equation
solve formula /iterations=integer /max=text /min=text /prec-absolute=number /prec-relative=number

- formula: An expression of the y variable – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /iterations=integer: Maximum number of iterations before giving up – values: an integer
- /max=text: An expression giving the upper boundary for dichotomy approaches – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /min=text: An expression giving the lower boundary for dichotomy approaches – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /prec-absolute=number: absolute precision required – values: a floating-point number
- /prec-relative=number: relative precision required – values: a floating-point number
Solves an equation on the current dataset. For instance,
QSoas> solve y**2-x
solves the equation y**2 - x = 0 for y at each value of x (here, this replaces the Y values by the square root of the X values).
By default, the algorithm used is an iterative process starting from
the current value of y (i.e. the value before the command
starts). You can use a dichotomy approach by specifying upper and lower
bounds using the /min= and /max= options:
QSoas> solve y**2-x /min=0 /max=x
auto-correlation
– Auto-correlation
auto-correlation
Other name: ac
Computes the auto-correlation function of the data, using FFT.
bin
– Bin
bin /boxes=integer /column=column /log=yes-no /max=number /min=number /norm=yes-no /weight=column

- /boxes=integer: – values: an integer
- /column=column: – values: the number/name of a column in a dataset
- /log=yes-no: – values: a boolean: yes, on, true or no, off, false
- /max=number: Maximum value of the histogram, overrides the maximum of the values in the data – values: a floating-point number
- /min=number: Minimum value of the histogram, overrides the minimum of the values in the data – values: a floating-point number
- /norm=yes-no: – values: a boolean: yes, on, true or no, off, false
- /weight=column: – values: the number/name of a column in a dataset
Creates a histogram by binning the Y values (or the values of the
column given by the /column option, see above)
into various boxes (whose number can be controlled using the /boxes
option). The new dataset has for X values the centers of the boxes and
for Y values the number of data points that fell into each box.
By default, all original points have a weight of 1. You can specify a
column number using the /weight=
option that contains the weight of
each point.
The range of values used is automatically deduced from the data, but
you can use the /min=
and /max=
options to set it manually.
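For instance, to build a histogram of the Y values over 50 boxes (the number is purely illustrative):
QSoas> bin /boxes=50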
add-noise
– Add noise
add-noise sigma /distribution=choice /seed=integer

- sigma: ‘Amplitude’ of the noise – values: a floating-point number
- /distribution=choice: The noise distribution – values: one of: cauchy, gaussian, uniform
- /seed=integer: The generator seed. If not specified or negative, uses the current time – values: an integer
This command adds random noise following the distribution given as the
/distribution
option (default is uniform noise) with the given
“amplitude” (the scale parameter of the distributions).
It is possible to obtain reproducible results by using a given /seed
parameter.
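For instance, to add reproducible gaussian noise of amplitude 0.05 (the values are illustrative):
QSoas> add-noise 0.05 /distribution=gaussian /seed=42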
linear-least-squares
– Linear least squares
linear-least-squares formula (/buffers=)datasets /accumulate=meta-data /for-which=code /meta=meta-data /output=yes-no /set-meta=meta-data /use-meta=yes-no /use-names=yes-no /use-stats=yes-no

- formula: formula – values: a piece of Ruby code
- (/buffers=)datasets (default option): Buffers to work on – values: comma-separated lists of datasets in the stack, see dataset lists
- /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
- /output=yes-no: whether to write data to output file (defaults to false) – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
- /use-meta=yes-no: if on (by default), you can use $meta to refer to the dataset meta-data – values: a boolean: yes, on, true or no, off, false
- /use-names=yes-no: if on, the columns will not be called x, y, and so on, but will take their names from the column names – values: a boolean: yes, on, true or no, off, false
- /use-stats=yes-no: if on, you can use $stats to refer to statistics (off by default) – values: a boolean: yes, on, true or no, off, false
Linear least squares runs a linear least squares minimization of the
given formula to the current dataset (or to the ones specified by
/buffers
and /for-which
). As the linear least squares problem has
a single analytical solution, there is no need for a fit interface
like for the fit-
commands, which are tuned for non-linear problems.
The formula is a function of x which contains arbitrary parameters
(whose names do not start with an uppercase letter).
Try for instance:
QSoas> generate-dataset 0 1 x**2+2*x+3
QSoas> add-noise 0.1
QSoas> linear-least-squares a*x**2+b*x+c
The results of the operation are the values of the parameters, which can be sent to the output file, to meta-data or to the accumulator, see there for more details.
Important warning QSoas does not try to check that the dependency of the formula on the parameters is truly linear. If that is not the case, you will simply get nonsensical answers.
contour
– Contours
contour levels… /flags=flags /reversed=yes-no /set-meta=meta-data /style=style

- levels…: levels at which to contour – values: several floating-point numbers, separated by ,
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
This command assumes that the dataset can be interpreted as X,Y,Z data (see there), which means that the perpendicular coordinates have been correctly set up.
The command computes the contours for the listed level values, and creates a new dataset for each contour found. There can be more than one contour for a single level value.
For instance, try out:
QSoas> generate-dataset -2 2 /columns=400 /samples=400
QSoas> transpose
QSoas> apply-formula x=4*i/399.0-2
QSoas> apply-formula /mode=xyz r=(x**2+z**2)**0.5;sin(PI*r)/r
QSoas> contour 0
Segments
It is possible to split a dataset into logical segments without changing
the contents of the dataset. The positions of the segment
boundaries are marked by vertical lines. Segments can be used for
different purposes: for
segment-by-segment operations,
step-by-step film loss correction
(using film-loss
) or dataset splitting
(using segments-chop
).
Segments can be detected using
find-steps
, or set manually using
set-segments
or chop
.
It is possible to remove the segments from a dataset by using clear-segments
.
find-steps
– Find steps
find-steps /average=integer /set-segments=yes-no /threshold=number

- /average=integer: Average over that many points – values: an integer
- /set-segments=yes-no: Whether or not to set the dataset segments – values: a boolean: yes, on, true or no, off, false
- /threshold=number: Detection threshold – values: a floating-point number
This function detects “jumps” in the data (such as potential changes in a chronoamperometry experiment, for instance), and displays them both on the terminal output and on the data display.
By default, this function only shows the segments it finds, but if the
/set-segments option is on, the segments are set to those found by
find-steps (removing the ones previously there).
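For instance, to detect the potential steps of a chronoamperogram and store them as segments (the threshold is illustrative and depends on the data):
QSoas> find-steps /threshold=0.1 /set-segments=true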
set-segments
– Set segments
set-segments
(interactive)
Interactively prompts for the addition/removal of segments. A left click adds a segment where the mouse is, while a right click removes the closest segment.
segments-chop
– Chop into segments
segments-chop /expand-meta=words /flags=flags /reversed=yes-no /set-meta=meta-data /style=style

- /expand-meta=words: Expand all the given meta-data, one value per produced dataset – values: several words, separated by ‘’
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
Cuts the dataset into several ones based on the segments defined in the
current dataset. This way, the effect of a chop /set-segments=true
followed by segments-chop is the same as that of the
chop without /set-segments=true.
If the option /expand-meta
is used, the corresponding meta-data
lists are split into the individually created datasets, see
here for more information.
clear-segments
– Clear segments
clear-segments
Removes all the segments from the current dataset.
film-loss
– Film loss
film-loss
(interactive)
Applies stepwise film loss correction (in the spirit of the
experiments in Fourmond et al, Anal. Chem.,
2009). For that, the current
dataset must be separated into segments, using
set-segments
, for instance. QSoas
then zooms on
the first segment. Right and left clicking around the final linear
decay will set the value of the film loss rate constant for this
step. Push space to switch to the next step, and when you have done
everything, push q
to obtain the corrected data.
Operations involving several datasets
It is possible to combine several datasets into one by applying
mathematical operations (subtraction, division and the like). Each of
these processes involves matching a data point of one dataset to a data point
of another one. There are several ways to do that, chosen by the
/mode option:
- with /mode=xvalues, the default, uses the values of X (ie the closest X value is picked). This mode will not allow values of X too far from either end of the dataset to be matched. Warning this will not work properly for datasets with several times the same X values, like cyclic voltammograms.
- /mode=extend is the same as /mode=xvalues, but it allows arbitrary extension, so that in effect, the first and last values of the dataset are repeated ad infinitum. This used to be the default behaviour, but it can cause confusing mistakes sometimes.
- With /mode=strict, the X values have to match exactly. If no matching X value is found, then a NaN value is used. Values in the second dataset corresponding to X values not in the first are simply ignored.
- with /mode=indices, points are matched on a one-to-one basis, ie point 1 of dataset 1 to point 1 of dataset 2, irrespective of the X values.
In addition to that, the operations can make use of the segments
defined on each dataset (see find-steps
and
set-segments
). If segments are defined and
/use-segments=true
, then the operations are applied
segment-by-segment, with the first point of each segment matching the
corresponding point in the other dataset. This mode is suited to
combining two datasets that are divided into logical bits (such as
chronoamperograms with steps at different potentials) whose exact
details (beginnings and durations of the steps) vary a little.
subtract
– Subtract
subtract buffers… /flags=flags /mode=choice /reversed=yes-no /set-meta=meta-data /style=style /use-segments=yes-no

Other name: S

- buffers…: The datasets of the operation – values: comma-separated lists of datasets in the stack, see dataset lists
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
- /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false
Subtracts the last dataset from all the previous ones. Useful for standard baseline removal.
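For instance, assuming the most recent dataset (number 0) is a blank that should be removed from datasets 1 and 2 (the numbers are purely illustrative):
QSoas> subtract 2,1,0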
div
– Divide
div buffers… /flags=flags /mode=choice /reversed=yes-no /set-meta=meta-data /style=style /use-segments=yes-no

- buffers…: The datasets of the operation – values: comma-separated lists of datasets in the stack, see dataset lists
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
- /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false
Divides all datasets by the last one. Just as subtract is useful
to remove one component of a multicomponent response when the components are additive,
div can be used to remove one of the components when they are
multiplicative, like film loss in protein film voltammetry experiments; see
Fourmond et al, Anal. Chem. 2009 for more information.
add
– Add
add buffers… /mode=choice /use-segments=yes-no

- buffers…: Datasets to add – values: comma-separated lists of datasets in the stack, see dataset lists
- /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
- /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false
Adds all the given datasets and pushes the result (a single dataset).
multiply
– Multiply
multiply buffers… /mode=choice /use-segments=yes-no

Other name: mul

- buffers…: Datasets to multiply – values: comma-separated lists of datasets in the stack, see dataset lists
- /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
- /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false
Multiplies all the given datasets and pushes the result (a single dataset).
average
– Average
average buffers… /count=yes-no /mode=choice /split=yes-no /use-segments=yes-no

- buffers…: Datasets to average – values: comma-separated lists of datasets in the stack, see dataset lists
- /count=yes-no: If on, a last column contains the number of averaged points for each value – values: a boolean: yes, on, true or no, off, false
- /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
- /split=yes-no: If on, the datasets are automatically split into monotonic parts before averaging. – values: a boolean: yes, on, true or no, off, false
- /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false
In a manner similar to subtract
and
div
, the average
command averages all the
datasets given into one, with the same segment-by-segment capacities.
An additional feature of average
is its ability to first
split the datasets into monotonic parts before averaging (when /split
is on). That is the default when only one dataset is
provided. This proves useful for averaging the forward and return scan
in a cyclic voltammogram.
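For instance, to average the forward and return scans of the cyclic voltammogram in the current dataset (splitting into monotonic parts is the default when a single dataset is given, but it can be made explicit):
QSoas> average 0 /split=true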
merge
– Merge datasets based on X values
merge buffers… /mode=choice /use-segments=yes-no

- buffers…: The datasets of the operation – values: comma-separated lists of datasets in the stack, see dataset lists
- /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
- /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false
Merges the second dataset with the first one, and keeps Y of the second
as a function of Y of the first. The algorithm for finding which point
in the second corresponds to a given one in the first is the same as
that of the other commands in this section (subtract, div…).
If more than two datasets are specified, the last one gets merged with each of those before.
contract
– Group datasets on X values
contract buffers… /contract-meta=meta-data /mode=choice /perp-meta=text /use-columns=columns /use-segments=yes-no

- buffers…: Datasets to contract – values: comma-separated lists of datasets in the stack, see dataset lists
- /contract-meta=meta-data: Contracts all the named meta-data into meta-data lists – values: comma-separated list of meta-data to group into lists, see there
- /mode=choice: Whether operations try to match x values or indices – values: one of: extend, indices, strict, xvalues
- /perp-meta=text: defines the perpendicular coordinate from meta-data – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /use-columns=columns: if specified, uses only the given columns for the contraction – values: a comma-separated list of column names
- /use-segments=yes-no: If on, operations are performed segment-by-segment – values: a boolean: yes, on, true or no, off, false
contract
does the reverse of expand
, ie it regroups in
one dataset several values of Y that run against the same values of
X. The result is a dataset that contains as many Y columns as the total
of Y columns of all the arguments. X matching between the datasets is
done as for the other commands in this section (div
or
subtract
).
You can specify a column list using /use-columns
(see
above for more information about column lists), in
which case the other columns from the datasets are ignored.
If you specify one or several names of meta using the /contract-meta
option, their values will be gathered into a list of meta-data
(instead of keeping the value of the first dataset). See also
here.
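For instance, to group datasets 2, 1 and 0 into a single multi-column dataset whose perpendicular coordinates are taken from their sr meta-data (assuming that meta-data is present; the dataset numbers are illustrative):
QSoas> contract 2,1,0 /perp-meta=sr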
Data inspection facilities
Options for data output
The commands below (and some others too, like eval
) are able to
compute a number of quantities from the datasets they work on, such as
various statistics, the position of peaks, and so on. QSoas
provides
several ways to store and work with these data.
Saving to the output file
The “traditional” way is to store the data in the output
file. They end up as TAB-separated data, with a
generally explicit header, and the name of the dataset the data is
extracted from in the first column. When outputting to the output
file, you can force the writing of extra columns
containing some meta-data by listing them using the
/meta-data=
option.
Saving as meta-data
It is also possible to use the /set-meta=
option to “decorate” the
datasets with the results of the command, as meta-data. For instance:
running
QSoas> stats /set-meta=y_min
sets the y_min meta-data to the minimum value of the Y column of
the dataset. It is also possible to select several meta-data,
separating them using commas, or even change their name, such as
QSoas> stats /set-meta=y_min->my_interesting_meta
which also saves the minimum of the Y column as meta-data, but
this time under the name my_interesting_meta.
You can save all the data in one go under their original name using
/set-meta=*
.
Combining /accumulate=
and pop
to create new datasets on the fly
It is now possible to generate a dataset from scratch using the
/accumulate= option. This option takes an ordered list of output
values (and, possibly, meta-data), and accumulates the values into a
“hidden” dataset, until the command pop is called. For instance,
running on different datasets the following command:
QSoas> 1 /output=false /accumulate=x,y,area
will populate a dataset with 3 columns, containing respectively the X position, Y position, and area of the major peak of the datasets (with possibly extra columns for meta-data).
This command is typically used to parse a whole series of datasets
using run-for-each
or run-for-datasets
.
pop
– Pop accumulator
pop /drop=yes-no /status=yes-no

- /drop=yes-no: Drop the accumulator instead of pushing it on the stack – values: a boolean: yes, on, true or no, off, false
- /status=yes-no: Gets the status of the accumulator – values: a boolean: yes, on, true or no, off, false
A number of commands can accumulate data into a “hidden” dataset using
the /accumulate= option. The pop command takes that dataset,
pushes it onto the stack, and clears the “hidden” dataset.
With /drop=yes, the “hidden” dataset is just cleared, it is not pushed
onto the stack.
With /status=yes, this command just shows the current status of the
hidden dataset.
find-peaks
– Find peaks
find-peaks /accumulate=meta-data /include-borders=yes-no /meta=meta-data /output=yes-no /peaks=integer /save-parameters=file /set-meta=meta-data /threshold=number /which=choice /window=integer

- /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
- /include-borders=yes-no: whether or not to include borders – values: a boolean: yes, on, true or no, off, false
- /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
- /output=yes-no: whether to write data to output file (defaults to false) – values: a boolean: yes, on, true or no, off, false
- /peaks=integer: Display only that many peaks (by order of intensity) – values: an integer
- /save-parameters=file: a file in which to save the peak parameters as fit parameters – values: name of a file
- /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
- /threshold=number: threshold for the peak Y value – values: a floating-point number
- /which=choice: selects which of minima and/or maxima to find – values: one of: both, max, min
- /window=integer: width of the window – values: an integer
Finds all the peaks of the current dataset. Peaks are local extrema
over a window whose number of points is given by /window (8 by default).
If /output is on, then the peak data is written to the output
file. This function will find many peaks on noisy data; you can limit
the output to the first n ones by using /peaks=n (peaks are ranked by
amplitude with respect to the average of the dataset).
By default, if a point at either end of the dataset is an extremum, it
is not included, unless you use /include-borders=true
.
Peaks are indicated on the dataset using lines, and their position is
written to the terminal. In addition, if /output
is on (off by
default), they are also written to the output file.
With the /save-parameters
option, you can save the position of the
peaks as a “fit parameter file”, which you can reload later, in a peak
fit for instance, as a help to properly set the initial values. For
this to work, you probably need to manually edit the parameters file
(with any text editor) to give the parameters the names corresponding
to the ones of the fit.
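For instance, to list only the three most intense peaks, detected over a wider window, and write them to the output file (the values are illustrative):
QSoas> find-peaks /peaks=3 /window=20 /output=true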
echem-peaks
– Find peaks pairs
echem-peaks /accumulate=meta-data /include-borders=yes-no /meta=meta-data /output=yes-no /pairs=integer /save-parameters=file /set-meta=meta-data /threshold=number /which=choice /window=integer

- /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
- /include-borders=yes-no: whether or not to include borders – values: a boolean: yes, on, true or no, off, false
- /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
- /output=yes-no: whether to write data to output file (defaults to false) – values: a boolean: yes, on, true or no, off, false
- /pairs=integer: Display (and output) only that many peak pairs (by order of intensity) – values: an integer
- /save-parameters=file: a file in which to save the peak parameters as fit parameters – values: name of a file
- /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
- /threshold=number: threshold for the peak Y value – values: a floating-point number
- /which=choice: selects which of minima and/or maxima to find – values: one of: both, max, min
- /window=integer: width of the window – values: an integer
This function tries to find “pairs” of peaks that may be the anodic and cathodic peaks of a redox couple, and outputs useful information about those.
1
– Find peak
1 /accumulate=meta-data /include-borders=yes-no /meta=meta-data /output=yes-no /save-parameters=file /set-meta=meta-data /threshold=number /which=choice /window=integer

- /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
- /include-borders=yes-no: whether or not to include borders – values: a boolean: yes, on, true or no, off, false
- /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
- /output=yes-no: whether to write data to output file (defaults to true) – values: a boolean: yes, on, true or no, off, false
- /save-parameters=file: a file in which to save the peak parameters as fit parameters – values: name of a file
- /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
- /threshold=number: threshold for the peak Y value – values: a floating-point number
- /which=choice: selects which of minima and/or maxima to find – values: one of: both, max, min
- /window=integer: width of the window – values: an integer
Equivalent to
QSoas> find-peaks /peaks=1 /output=true
2
– Find two peaks
2 /accumulate=meta-data /include-borders=yes-no /meta=meta-data /output=yes-no /save-parameters=file /set-meta=meta-data /threshold=number /which=choice /window=integer

- /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
- /include-borders=yes-no: whether or not to include borders – values: a boolean: yes, on, true or no, off, false
- /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
- /output=yes-no: whether to write data to output file (defaults to true) – values: a boolean: yes, on, true or no, off, false
- /save-parameters=file: a file in which to save the peak parameters as fit parameters – values: name of a file
- /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
- /threshold=number: threshold for the peak Y value – values: a floating-point number
- /which=choice: selects which of minima and/or maxima to find – values: one of: both, max, min
- /window=integer: width of the window – values: an integer
Equivalent to
QSoas> find-peaks /peaks=2 /output=true
stats
– Statistics
stats (/buffers=)datasets /accumulate=meta-data /for-which=code /meta=meta-data /output=yes-no /set-meta=meta-data /stats=stats-names /use-segments=yes-no

- (/buffers=)datasets (default option): datasets to work on – values: comma-separated lists of datasets in the stack, see dataset lists
- /accumulate=meta-data: accumulate the given data into a dataset – values: comma separated list of names of meta-data to accumulate, see here
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /meta=meta-data: when writing to output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
- /output=yes-no: whether to write data to output file (defaults to false) – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma separated list of names of meta-data, or a->b specifications, see here
- /stats=stats-names: writes only the given stats – values: one or more names of statistics (as displayed by stats), separated by ,
- /use-segments=yes-no: makes statistics segment by segment (defaults to false) – values: a boolean: yes, on, true or no, off, false
stats displays various statistics about the current dataset (or the ones
specified by the /buffers option). The following statistics are available:

- buffer, rows, columns, segments: the buffer name, and the row, column and segment counts.
- _sum, _average, _var, _stddev: the sum, the average, the variance and the standard deviation of the values of the column.
- all_average, all_sum, yall_average, yall_sum: the average and sum of all columns, or just of the Y columns.
- _first, _last: the first and last values of the column.
- _min, _max: the minimum and maximum values of the column.
- _norm: the norm of the column, that is the square root of the sum of the squares of its values.
- y_int: the integral of the Y values over the X values.
- _med, _q10, _q25, _q75, _q90: the median, and the 10th, 25th, 75th and 90th percentiles.
- _delta_min, _delta_max: the minimum and maximum values of the difference between two successive values.
- y_a, y_b, y_keff: the linear regression coefficients of the Y column over X: a is the slope and b the value at 0, and keff is the effective first-order rate constant of decay to 0.
In this list, the statistics that start with _
are available for all columns
(for instance x_min
, y_min
, y2_min
, etc…), the ones that start
with y_
are only available for Y columns (such as y_int
, y2_int
,
etc…), and the other ones are global (buffer
, rows
, etc.).
These statistics are also available in Ruby code with the
name $stats
, such as $stats.x_min
.
Statistics can be written to the output file with /output=true
. If
you specify /use-segments=true
, the statistics are also displayed
segment-by-segment (and written to the output file if
/output=true
). If you want some meta-data to be written to the
output file together with the statistics, provide them as a
comma-separated list to the /meta
option, or, alternatively, use the
/meta
option of the output
command. See more about that
above.
It is possible to run stats
on several datasets by using the
/buffers=
option (possibly combined with the
/for-which
option), to extract information from a
large number of datasets. However, it should be noted that, for most
of the cases, using eval
can help you produce a much more
tailored output.
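For instance, to write only the integral and the maximum of the Y column of the current dataset to the output file, together with the sr meta-data (assuming it is present):
QSoas> stats /stats=y_int,y_max /output=true /meta=sr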
cursor
– Cursor
cursor
(interactive)
Other name: cu
Starts an interactive mode (which you can end by pressing q or
Escape), in which you can position a cursor by left-clicking on the
curve, to know its exact X and Y positions.
Using the right mouse button, it is also possible to position a reference point. After that, the command also shows the difference and the ratios in X,Y coordinates between the cursor and the reference point.
Cursor positions can be saved to the output file by pressing the space bar.
Hitting u subtracts the Y value of the current point from the Y values
of the dataset and returns. Hitting v divides by the current Y value.
reglin
– Linear regression
reglin
(interactive)
Other name: reg
Linear regression. Using the left and right mouse buttons, select a region whose slope is of interest. The terminal shows the a and b parameters (the equation of the line is y = a*x + b), and also the effective first-order rate constant, ie the rate constant of the exponential decay whose first-order expansion gives the same linear approximation (which amounts to -a/b).
Using the space bar it is possible to save the values displayed in the terminal to the output file.
With the key p
, the linear regression is used as a baseline for
analyzing the first peak next to the regression (in the direction of
X values), showing the peak position, amplitude, and the half-peak
position. This is useful for analyzing electrochemical data, for
obtaining the half-wave potential.
Fits
QSoas was designed with a particular emphasis on fitting data. It allows complex fits, and in particular multi-dataset fits, when functions with shared parameters are fit to different datasets. Fits fall into two different categories:
- mono-dataset fits, ie fits that apply to one dataset, but that can be applied to several datasets at the same time with shared parameters
- multi-dataset fits, ie fits that need at least two datasets to work
Fits can be used through several commands: for all fits there are an
mfit- and a sim- command, and for mono-dataset fits,
there is a fit- command in addition.
- The fit- command fits a single dataset, when the fit allows that. It takes no argument.
- The mfit- command fits several datasets at the same time. It takes the numbers of the datasets it will work on.
- The sim- command takes a saved parameters file and a series of datasets, and pushes the data computed from the parameters on the stack, using the X values of the datasets given as arguments (their Y values are not used). The sim commands are described below.
All fits commands share the following options:
- With the /extra-parameters option, one defines additional parameters for the fit, that can be used to define parameters by formulas.
- Passing the name of a saved parameters file to the /parameters option preloads the given parameters at the beginning of the fit.
- The /set-from-meta option makes it possible to set the value of parameters from meta-data. For instance, running a fit with /set-from-meta=v=sr will set the value of the parameter v to the value of the meta-data sr (if present). Specify more of those by separating them with commas.
- The /debug option is for debugging fits or fit engines. It takes a debug level: 0 (no debug info), 1 and 2.
- Using the /engine option, one can pre-select the fit engine for fitting (exactly like choosing it in the dialog box).
- The /window-title= option makes it possible to select the title of the fit window, which can be useful if you’re running several fits at the same time on the same computer.
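For instance (the parameter file name here is purely hypothetical), one could start a fit that preloads previously saved parameters and uses an explicit window title:
QSoas> fit-exponential-decay /parameters=previous.params /window-title='run 12'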
In addition to these commands, QSoas provides commands to combine fits together, to fit derivatives of the signals, and to define fits with distributions of parameters.
The fit engines now feature an “expert”, command-line, mode, which
makes it possible to run fits automatically, to set parameters using
expressions, to save “trajectories”, i.e. series of starting
parameters -> ending parameters, and to explore the parameter space
using various explorers. These features are accessible through the
following options of the fit-
and mfit-
commands:
- /expert=true activates the expert mode and allows typing commands;
- /script= makes it possible to run a script file at fit startup time;
- /arg1=, /arg2= and /arg3= can be used to give arguments to the script specified by /script=.
The commands for the command-line interface are described below.
Sim commands
The sim- commands are used for non-interactive computations linked
to fits. They all take a parameters file and a series of
datasets. What they do depends on the value of the /operation=
option.
- With /operation=compute, the default, the command computes the values predicted from the fit, as if one had used the mfit- command, loaded the parameters file, and used “Push to stack”.
- With /operation=reexport, the command does the same as loading the parameters and then “Export to output file with errors”.
- /operation=subfunctions is like compute, but the fit subfunctions are also computed; they are added as additional Y columns.
- /operation=residuals is like compute, but the residuals are computed, that is the difference between the original data and the function. In addition, the global variable $residuals is set to the sum of the squares of the differences.
- With /operation=annotate, the original data is left untouched, but meta-data corresponding to the fit parameters are added (new datasets are created with the new meta-data).
- With /operation=push, the parameters are pushed as a single dataset on the stack.
In addition to the options common to all fit commands, the sim-
commands also take an /override=
option, which provides a
possibility to change the values of the parameters with respect to the
values read from the parameter file. The syntax is a comma or
colon-separated list of assignments of the form parameter=value
or
parameter[#dataset]=value
.
Important note If you use /operation=compute, to make sure the
generated datasets are in the same order as the original datasets, use
the /reversed=true option.
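For instance, to recompute the fitted curves for datasets 2, 1 and 0 from a saved parameter file (the file name is hypothetical), overriding the value of one parameter:
QSoas> sim-exponential-decay params.dat 2,1,0 /operation=compute /override=tau_1=100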
Fit engines
QSoas provides a number of fit engines with different strengths and weaknesses. Most are based on a Levenberg-Marquardt solver, with a few variants that make some of them more useful in certain situations. The rule is, if you are unhappy with how a particular fit engine converges, try another one !
- odrpack is a very good Levenberg-Marquardt fit engine based on the ODRPACK netlib package. It is the default (and most often the best) for fitting a small number of datasets.
- lmder and lmsder are the Levenberg-Marquardt fit engines built into the GNU Scientific Library.
- qsoas and multi are QSoas’s own Levenberg-Marquardt solvers; they are in general significantly faster than the other ones, and the multi fit engine is optimized for massively multi-dataset fits. Don’t use anything else than multi if you have more than 20 datasets with some parameters (but not all) global.
- simplex is a naive implementation of the “Simplex” minimization algorithm. It is much faster than all the other ones, but its convergence is sometimes not very good. You may want to refine the fit using one of the Levenberg-Marquardt engines once you have found a suitable minimum using this engine.
- pso is a naive implementation of the “Particle Swarm” optimizer.
Subfunctions
Some fits support displaying “sub-functions”: for instance, “peak
fits” like fit-gaussian
display each individual component in
color if there are more than one. They are documented in each
individual fitting function when applicable. They are not always
displayed by default, as in some cases, such as
fit-exponential-decay
, it generally makes the display
less clear.
To show/hide subfunctions, use “Toggle subfunction display” from within the “Data…” submenu in the fit dialog. If that item is absent, then the fit does not support subfunctions.
You can also push the individual components to the stack for further manipulation using “Data…”/”Push all subfunctions”.
Parameters restrictions
Some fits implement restrictions on the values that can be taken by
parameters. For instance, the time constants for the
exponential-decay
cannot be negative, neither for the starting
parameters, nor for any intermediate (iteration, computation of
derivatives).
This is done so that the fit algorithm does not go into directions which are assured not to give relevant parameters.
Fit manipulations
QSoas provides a series of commands to create new fits from other
fits:
- to combine several fits together using a mathematical formula, use
combine-fits
- to fit the derivative of a fit function (possibly together with
the original function), use
define-derived-fit
- to fit a function with a distribution of one of the parameters,
use
define-distribution-fit
- to change the parameters of a fit and impose additional
restrictions on them, use
reparametrize-fit
combine-fits
– Combine fits
combine-fits name formula fits… /redefine=yes-no

- name: name of the new fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- formula: how to combine the various fits – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- fits…: the fits to combine together – values: a series of names of a fit (without the fit- prefix), separated by spaces
- /redefine=yes-no: If the new fit already exists, redefines it – values: a boolean: yes, on, true or no, off, false
Creates a new fit named name based on other fits, combined through a
formula. The formula uses y1, y2 and so on to refer to the
fits. You specify the fit names by removing the fit- or mfit-
prefix. For instance, to fit a sum of lorentzians and gaussians, one
just has to do:
QSoas> combine-fits lg 'y1 + y2' lorentzian gaussian
This creates a new fit, lg
, and hence three new commands, fit-lg
,
mfit-lg
and sim-lg
. The fit is a sum of a
lorentzian fit (y1
) and a
gaussian fit (y2
). The new fit shares the
options of all the original fits.
The newly-defined fit only lasts for the current session, if you need
something more persistent, consider setting up a startup file
using startup-files
.
If you try to redefine an existing fit, the command will stop, unless
you use /redefine=true (off by default), in which case existing
(custom) fits are silently redefined. You cannot redefine built-in fits.
define-derived-fit
– Create a derived fit
define-derived-fit existing-fit /mode=choice /redefine=yes-no

- existing-fit: name of the fit to make a derived fit of – values: the name of a fit (without the fit- prefix)
- /mode=choice: Whether one fits only the derivative, both the derivative and the original data together or separated – values: one of: combined, deriv-only, separated
- /redefine=yes-no: Does not error out if the fit already exists – values: a boolean: yes, on, true or no, off, false
Defines new fit commands based on existing-fit (without the fit-
prefix). It fits:
- only the derivative if /mode=deriv-only, in which case it is named fit-deriv-only-existing-fit;
- a multi-dataset fit for the original function in one dataset and the derivative in the second if /mode=separated (the default mode), in which case the fit is named mfit-deriv-existing-fit;
- both the original function and the derivative in a single dataset (the derivative is assumed to be the data after the first discontinuity in the X values) if /mode=combined, in which case the new fit is named fit-deriv-combined-existing-fit.
This function is explained in more details in the tutorial.
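For instance, to define a fit of the derivative of a gaussian peak alone, one could run something like:
QSoas> define-derived-fit gaussian /mode=deriv-only
which should create, among others, the fit-deriv-only-gaussian command.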
define-distribution-fit
– Define fit with distribution
define-distribution-fit name existing-fit parameter /distribution=choice /redefine=yes-no

- name: name of the new fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- existing-fit: name of the fit to make a derived fit from – values: the name of a fit (without the fit- prefix)
- parameter: the parameter over which to integrate – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /distribution=choice: The default distribution – values: one of: gaussian, log-uniform, lorentzian, uniform
- /redefine=yes-no: If the new fit already exists, redefines it – values: a boolean: yes, on, true or no, off, false
Defines a new fit called name based on the fit existing-fit, in which the data is the result of the integration of the original fit over a distribution of the parameter.
You can choose the default distribution with the /distribution=
option. It is one of:
- gaussian: gaussian distribution of the parameter
- lorentzian: lorentzian distribution of the parameter
- log-uniform: uniform probability between two values for the logarithm of the parameter
- uniform: uniform probability between two values
Of course, even for theoretically infinite distributions (gaussian
and lorentzian
distributions above), QSoas
does not integrate over
the whole real axis, which is why these distributions get an extra
parameter, fixed by default, which indicates the extent of the
integration interval in dimensionless units (independent of the value
of the parameter). In principle, these values are chosen as a good
compromise between accuracy and computing time, but they can be tuned
should you need it.
The created fit commands also take a /distribution
option with the
same meaning.
Like for combine-fits
, you cannot redefine existing fits with
this command unless /redefine=true
is specified.
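For instance, to define a variant of the exponential-decay fit in which the time constant tau_1 follows a gaussian distribution (the name of the new fit is arbitrary):
QSoas> define-distribution-fit exp-dist exponential-decay tau_1 /distribution=gaussian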
define-distribution
– Define new parameter distribution
define-distribution
name parameters… weight left right
- name: name of the new distribution – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- parameters…: parameters of the distribution – values: several words, separated by ‘’
- weight: expression for the weight – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- left: expression for the left boundary – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- right: expression for the right boundary – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Defines a new “distribution of parameters” to be used by
define-distribution-fit
. The arguments taken are the name of
the new distribution (like gaussian
for instance), the names of the
parameters of the distribution, the expressions giving the weight and
left and right boundaries for the integration (as a function of the
parameters of the distribution).
reparametrize-fit
– Reparametrize fit
reparametrize-fit name fit new-parameters redefinitions… /conditions=words /redefine=yes-no

- name: name of the new fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- fit: the fit to modify – values: the name of a fit (without the fit- prefix)
- new-parameters: Comma-separated list of new parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- redefinitions…: a list of redefinitions, separated by ;; – values: several words, separated by ‘;;’
- /conditions=words: Additional conditions that must be fulfilled by the parameters (Ruby code) – values: several words, separated by ‘’
- /redefine=yes-no: If the new fit already exists, redefines it – values: a boolean: yes, on, true or no, off, false
This command makes it possible to reparametrize a fit: add new parameters, and express old parameters as a function of other old parameters and new ones.
For instance, to reparametrize a mono-exponential fit in terms of rate constant rather than time constant, one can use:
QSoas> reparametrize-fit my-exp exponential-decay k_1 tau_1=1/k_1
This creates a new fit named my-exp
(hence it creates the commands
fit-my-exp
, mfit-my-exp
and sim-my-exp
), in which the parameter
tau_1
of the exponential-decay
fit has been replaced by k_1
(its reciprocal).
The /conditions
option can be used to provide additional conditions
on the parameters to be fitted:
QSoas> reparametrize-fit my-exp exponential-decay k_1 tau_1=1/k_1 /conditions=A_1>3
With this command, in addition to defining a new fit as before, it
adds the condition that the parameter A_1
must be greater than 3.
By default, this command refuses to redefine an existing fit; use
/redefine=true
if that is what you want to do.
Exponential fits
There are several ways to fit exponentials to data. The simplest is
fit-exponential-decay
, which fits a decay with an arbitrary number of exponentials to the data.
fit-exponential-decay
– Fit: Multi-exponential fits
fit-exponential-decay /absolute=yes-no /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /exponentials=integer /extra-parameters=text /loss=yes-no /parameters=file /script=file /set-from-meta=parameters-meta-data (see there) /slow=yes-no /window-title=text (interactive)

- /absolute=yes-no: whether the amplitude is absolute or relative to the asymptote (defaults to true) – values: a boolean: yes, on, true or no, off, false
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /exponentials=integer: Number of exponentials – values: an integer
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /loss=yes-no: whether the sum of exponentials should be multiplied by an exp(-kt) function (default: false) – values: a boolean: yes, on, true or no, off, false
- /parameters=file: pre-loads parameters – values: name of a file
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /slow=yes-no: whether there is a very slow phase (that shows up as a linear change in Y against time; default: false) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits a sum of exponential decays, plus an asymptotic value, to the
current dataset. The slow, linear term is only present if the /slow
option is on (it corresponds to the parameter named slow), and the
multiplicative exp(-kt) loss factor differs from 1 only if /loss is
on. If the amplitudes are relative (/absolute=false), the fitted
amplitude parameters are defined relative to the asymptotic value
rather than being absolute; relative amplitudes should not be used to
fit data that tend to 0.
Subfunctions
Each individual exponential, shown together with the asymptotic value. The subfunctions are not displayed by default.
Parameters restrictions
The values of the time constants cannot be negative, nor can the loss rate constant.
mfit-exponential-decay
– Multi fit: Multi-exponential fits
mfit-exponential-decay datasets… /absolute=yes-no /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /exponentials=integer /extra-parameters=text /loss=yes-no /parameters=file /perp-meta=text /script=file /set-from-meta=parameters-meta-data (see there) /slow=yes-no /weight-buffers=yes-no /window-title=text (interactive)

- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
- /absolute=yes-no: whether the amplitude is absolute or relative to the asymptote (defaults to true) – values: a boolean: yes, on, true or no, off, false
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /exponentials=integer: Number of exponentials – values: an integer
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /loss=yes-no: whether the sum of exponentials should be multiplied by an exp(-kt) function (default: false) – values: a boolean: yes, on, true or no, off, false
- /parameters=file: pre-loads parameters – values: name of a file
- /perp-meta=text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /slow=yes-no: whether there is a very slow phase (that shows up as a linear change in Y against time; default: false) – values: a boolean: yes, on, true or no, off, false
- /weight-buffers=yes-no: whether or not to weight datasets (off by default) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset version of the
fit-exponential-decay
fit.
sim-exponential-decay
– Simulation: Multi-exponential fits
sim-exponential-decay parameters datasets… /absolute=yes-no /debug=integer /engine=engine /exponentials=integer /extra-parameters=text /flags=flags /for-which=code /loss=yes-no /operation=choice /override=overrides /reversed=yes-no /set-meta=meta-data /slow=yes-no /style=style

- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
- /absolute=yes-no: whether the amplitude is absolute or relative to the asymptote (defaults to true) – values: a boolean: yes, on, true or no, off, false
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /exponentials=integer: Number of exponentials – values: an integer
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /loss=yes-no: whether the sum of exponentials should be multiplied by an exp(-kt) function (default: false) – values: a boolean: yes, on, true or no, off, false
- /operation=choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of: annotate, compute, jacobian, push, reexport, residuals, subfunctions
- /override=overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /slow=yes-no: whether there is a very slow phase (that shows up as a linear change in Y against time; default: false) – values: a boolean: yes, on, true or no, off, false
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
Simulation command for the
fit-exponential-decay
fit.
fit-multiexp-multistep
– Fit: Multi-step and multi-exponential
fit-multiexp-multistep /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /exponentials=integer /extra-parameters=text /independent=yes-no /parameters=file /script=file /set-from-meta=parameters-meta-data (see there) /steps=integers /window-title=text (interactive)

- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /exponentials=integer: Number of exponentials – values: an integer
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /independent=yes-no: Whether irreversible loss is independent on each step – values: a boolean: yes, on, true or no, off, false
- /parameters=file: pre-loads parameters – values: name of a file
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /steps=integers: Step list with numbered conditions – values: a comma-separated list of integers
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
This fit is an extension of the exponential-decay fit for experiments that consist of several steps in which the time constants are expected to change, but some may be common to different steps. The steps are specified using the /steps option. Specifying /steps=0,1,0 means that there are three steps, but only two distinct sets of time constants, a first one (0, used for steps 1 and 3) and a second one (1, used only for step 2).
In each of the steps, the formula fitted to the data is a sum of exponential phases: for each step, the parameters are the time constants of the set used for that step, the relative amplitudes of the exponential phases and the asymptotic value of y on that step (in the absence of film loss); the asymptotic value is defined recursively from one step to the next, so as to keep track of film loss over the whole experiment.
Parameter restrictions
Like in the exponential-decay fit, the values of the time constants cannot be negative, nor can the film loss rate constant.
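As an illustration (the option values here are arbitrary), a three-step experiment in which the first and last steps share the same two time constants could be set up with:
QSoas> fit-multiexp-multistep /steps=0,1,0 /exponentials=2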
mfit-multiexp-multistep – Multi fit: Multi-step and multi-exponential
mfit-multiexp-multistep datasets… /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /exponentials=integer /extra-parameters=text /independent=yes-no /parameters=file /perp-meta=text /script=file /set-from-meta=parameters-meta-data (see there) /steps=integers /weight-buffers=yes-no /window-title=text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /exponentials=integer: Number of exponentials – values: an integer
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /independent=yes-no: Whether irreversible loss is independent on each step – values: a boolean: yes, on, true or no, off, false
- /parameters=file: pre-loads parameters – values: name of a file
- /perp-meta=text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /steps=integers: Step list with numbered conditions – values: a comma-separated list of integers
- /weight-buffers=yes-no: whether or not to weight datasets (off by default) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
This is the multi-dataset version of the multiexp-multistep
fit.
sim-multiexp-multistep – Simulation: Multi-step and multi-exponential
sim-multiexp-multistep parameters datasets… /debug=integer /engine=engine /exponentials=integer /extra-parameters=text /flags=flags /for-which=code /independent=yes-no /operation=choice /override=overrides /reversed=yes-no /set-meta=meta-data /steps=integers /style=style
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /exponentials=integer: Number of exponentials – values: an integer
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /independent=yes-no: Whether irreversible loss is independent on each step – values: a boolean: yes, on, true or no, off, false
- /operation=choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of: annotate, compute, jacobian, push, reexport, residuals, subfunctions
- /override=overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /steps=integers: Step list with numbered conditions – values: a comma-separated list of integers
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
This is the simulation command for the
multiexp-multistep
fit.
fit-linear-kinetic-system – Fit: Several steps with a kinetic evolution
fit-linear-kinetic-system /additional-loss=yes-no /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /offset=yes-no /parameters=file /script=file /set-from-meta=parameters-meta-data (see there) /species=integer /steps=words /window-title=text (interactive)
- /additional-loss=yes-no: Additional unconstrained ‘irreversible loss’ rate constants – values: a boolean: yes, on, true or no, off, false
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /offset=yes-no: If on, allow for a constant offset to be added – values: a boolean: yes, on, true or no, off, false
- /parameters=file: pre-loads parameters – values: name of a file
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /species=integer: Number of species – values: an integer
- /steps=words: Step list with numbered conditions – values: several words, separated by ‘,’
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
This is an extension to the multiexp-multistep fit. This fit models the evolution of a system of chemical species that interconvert through first-order reactions; for instance, the rate constant k(1→2) is the rate of production of species 2 from species 1.
Like in multiexp-multistep, the time is divided into steps, during which the values of the rate constants are constant. The concentration of the species is assumed to be continuous at step changes. Over each step, the fit solves the corresponding system of first-order differential equations, dc_i/dt = Σ_j (k(j→i) c_j − k(i→j) c_i), whose solution is a sum of exponential decays.
The step specification is just a list of “names” (numbers, letters…), separated by commas. Each name corresponds to a set of rate constants in the equations above. For instance, with the specification /steps=1,2,1, there are three steps, but only two sets of rate constants, 1 and 2, the first one being reused.
This fit was used for many of the publications of the team of the author of QSoas, such as Fourmond et al, Nat. Chem., 2014 or Jacques et al, BBA, 2014.
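For instance (the values are purely illustrative), a two-species system measured over three steps, the first and last of which share the same rate constants, could be fitted with:
QSoas> fit-linear-kinetic-system /species=2 /steps=1,2,1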
mfit-linear-kinetic-system – Multi fit: Several steps with a kinetic evolution
mfit-linear-kinetic-system datasets… /additional-loss=yes-no /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /offset=yes-no /parameters=file /perp-meta=text /script=file /set-from-meta=parameters-meta-data (see there) /species=integer /steps=words /weight-buffers=yes-no /window-title=text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
- /additional-loss=yes-no: Additional unconstrained ‘irreversible loss’ rate constants – values: a boolean: yes, on, true or no, off, false
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /offset=yes-no: If on, allow for a constant offset to be added – values: a boolean: yes, on, true or no, off, false
- /parameters=file: pre-loads parameters – values: name of a file
- /perp-meta=text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /species=integer: Number of species – values: an integer
- /steps=words: Step list with numbered conditions – values: several words, separated by ‘,’
- /weight-buffers=yes-no: whether or not to weight datasets (off by default) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
This is the multi-dataset version of the linear-kinetic-system
fit.
sim-linear-kinetic-system – Simulation: Several steps with a kinetic evolution
sim-linear-kinetic-system parameters datasets… /additional-loss=yes-no /debug=integer /engine=engine /extra-parameters=text /flags=flags /for-which=code /offset=yes-no /operation=choice /override=overrides /reversed=yes-no /set-meta=meta-data /species=integer /steps=words /style=style
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
- /additional-loss=yes-no: Additional unconstrained ‘irreversible loss’ rate constants – values: a boolean: yes, on, true or no, off, false
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /offset=yes-no: If on, allow for a constant offset to be added – values: a boolean: yes, on, true or no, off, false
- /operation=choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of: annotate, compute, jacobian, push, reexport, residuals, subfunctions
- /override=overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /species=integer: Number of species – values: an integer
- /steps=words: Step list with numbered conditions – values: several words, separated by ‘,’
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
This is the simulation command for the linear-kinetic-system
fit.
Polynomial fits
fit-polynomial – Fit: Fit to a polynomial function
fit-polynomial /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /monotonic=yes-no /number=integer /order=integers /parameters=file /prefactor=yes-no /script=file /set-from-meta=parameters-meta-data (see there) /window-title=text /without-inflexions=yes-no (interactive)
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /monotonic=yes-no: With this on, only monotonic polynomials are solutions – values: a boolean: yes, on, true or no, off, false
- /number=integer: Number of distinct polynomial functions – values: an integer
- /order=integers: Order of the polynomial functions – values: a comma-separated list of integers
- /parameters=file: pre-loads parameters – values: name of a file
- /prefactor=yes-no: Whether there is a prefactor for each polynomial (on by default for multiple polynomials) – values: a boolean: yes, on, true or no, off, false
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /without-inflexions=yes-no: If this is on, there are no inflexion points in the polynomials – values: a boolean: yes, on, true or no, off, false
Fits a sum of polynomials to the data.
The number of polynomial functions is given by the /number= option (defaults to 1), and the order of the polynomials is chosen using the /order= option. By default, the order is the same for all polynomials; to specify different ones, give a comma-separated list of orders, one for each polynomial, and don’t use /number=.
The prefactors are present by default when there is more than one polynomial, and absent otherwise. You can override that using the /prefactor option.
Parameter restrictions
By default, there are no restrictions on the parameters, but using /monotonic=true will discard parameter combinations that give non-monotonic polynomials (each of the individual polynomials, not the sum), and with /without-inflexions=true, it will discard parameter combinations that give inflexion points.
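For instance (arbitrary values), to fit a single third-order polynomial, or two polynomials of orders 2 and 4 restricted to monotonic solutions:
QSoas> fit-polynomial /order=3
QSoas> fit-polynomial /order=2,4 /monotonic=true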
mfit-polynomial – Multi fit: Fit to a polynomial function
mfit-polynomial datasets… /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /monotonic=yes-no /number=integer /order=integers /parameters=file /perp-meta=text /prefactor=yes-no /script=file /set-from-meta=parameters-meta-data (see there) /weight-buffers=yes-no /window-title=text /without-inflexions=yes-no (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /monotonic=yes-no: With this on, only monotonic polynomials are solutions – values: a boolean: yes, on, true or no, off, false
- /number=integer: Number of distinct polynomial functions – values: an integer
- /order=integers: Order of the polynomial functions – values: a comma-separated list of integers
- /parameters=file: pre-loads parameters – values: name of a file
- /perp-meta=text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /prefactor=yes-no: Whether there is a prefactor for each polynomial (on by default for multiple polynomials) – values: a boolean: yes, on, true or no, off, false
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /weight-buffers=yes-no: whether or not to weight datasets (off by default) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /without-inflexions=yes-no: If this is on, there are no inflexion points in the polynomials – values: a boolean: yes, on, true or no, off, false
This is the multidataset version of the polynomial
fit.
sim-polynomial – Simulation: Fit to a polynomial function
sim-polynomial parameters datasets… /debug=integer /engine=engine /extra-parameters=text /flags=flags /for-which=code /monotonic=yes-no /number=integer /operation=choice /order=integers /override=overrides /prefactor=yes-no /reversed=yes-no /set-meta=meta-data /style=style /without-inflexions=yes-no
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /monotonic=yes-no: With this on, only monotonic polynomials are solutions – values: a boolean: yes, on, true or no, off, false
- /number=integer: Number of distinct polynomial functions – values: an integer
- /operation=choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of: annotate, compute, jacobian, push, reexport, residuals, subfunctions
- /order=integers: Order of the polynomial functions – values: a comma-separated list of integers
- /override=overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;
- /prefactor=yes-no: Whether there is a prefactor for each polynomial (on by default for multiple polynomials) – values: a boolean: yes, on, true or no, off, false
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
- /without-inflexions=yes-no: If this is on, there are no inflexion points in the polynomials – values: a boolean: yes, on, true or no, off, false
This is the simulation command for the polynomial
fit.
Arbitrary fits
QSoas provides ways to fit arbitrary formulas (written in
Ruby) to data. While it is possible to do that on a
case-by-case basis using fit-arb
, it is also
possible to store formulas in a plain text file and load them using
load-fits
or define a new one using custom-fit
.
fit-arb – Fit: Arbitrary fit
fit-arb formulas /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /parameters=file /script=file /set-from-meta=parameters-meta-data (see there) /window-title=text /with=time-dependent parameters (interactive)
- formulas: |-separated formulas for the fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /parameters=file: pre-loads parameters – values: name of a file
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /with=time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (like co:2,exp), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Fits formula (a piece of Ruby code) to the current dataset.
Parameters are auto-detected. Some parameters are treated specifically:
- x_0 and y_0 are fixed by default and initialized to the first X or Y value of the dataset the fit applies to; temperature is also fixed and set to the current temperature
- Using fara counts as using temperature, except that its value is F/RT. You never get fara
as a fit parameter.
- dx is fixed by default to the difference in the x values of two consecutive points
- x_i and y_i, with i a strictly positive integer, are initially assumed to be evenly spread over the X or Y range.
, you should consider
using custom-fit
or writing it in a file and
loading that file with load-fits
.
Starting from QSoas
version 2.0, you can use the /with=
option to
make some of the parameters dependent on time in a flexible
fashion. See time dependent parameters
below for more information.
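As a simple illustration of the automatically detected parameters (A and tau are arbitrary parameter names; x_0 and y_0 are the special parameters described above):
QSoas> fit-arb y_0+A*exp(-(x-x_0)/tau)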
mfit-arb – Multi fit: Arbitrary fit
mfit-arb formulas datasets… /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /parameters=file /perp-meta=text /script=file /set-from-meta=parameters-meta-data (see there) /weight-buffers=yes-no /window-title=text /with=time-dependent parameters (interactive)
- formulas: |-separated formulas for the fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /parameters=file: pre-loads parameters – values: name of a file
- /perp-meta=text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /weight-buffers=yes-no: whether or not to weight datasets (off by default) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /with=time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (like co:2,exp), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Same as fit-arb
, but for multiple datasets.
Using mfit-arb
, it is possible to specify several formulas,
separated by |
.
If only one formula is specified, the same formula is applied to all datasets (with, as usual, the possibility to select which parameters are global or dataset-local).
If more than one formula is specified, the exact same number of datasets should be supplied; the first formula applies to the first dataset, the second formula to the second dataset, and so on… For instance, if you run:
QSoas> mfit-arb a*x+b|a*x+c|a*x+d 0 1 2
This command fits a*x+b
to dataset 0
, a*x+c
to dataset 1
and
a*x+d
to dataset 2
.
In this specific case, though, you could also have run
QSoas> mfit-arb a*x+b 0 1 2
and have a
common to all datasets, but b
dataset-specific.
sim-arb – Simulation: Arbitrary fit
sim-arb formulas parameters datasets… /debug=integer /engine=engine /extra-parameters=text /flags=flags /for-which=code /operation=choice /override=overrides /reversed=yes-no /set-meta=meta-data /style=style /with=time-dependent parameters
- formulas: |-separated formulas for the fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /operation=choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of: annotate, compute, jacobian, push, reexport, residuals, subfunctions
- /override=overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
- /with=time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (like co:2,exp), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Simulation command for
the fit-arb
fit. As for fit-arb
,
you need to specify the formula on the command-line.
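For instance, assuming sim.params is a parameter file saved from a fit of the same formula (the file name and dataset number are only illustrative), the following recomputes the curve on the X values of dataset 0, overriding one parameter value on the fly:
QSoas> sim-arb A*exp(-x/tau)+b sim.params 0 /override=tau=20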
load-fits – Load fits
load-fits file /redefine=yes-no
- file: File containing the fits to load – values: name of a file
- /redefine=yes-no: If a fit already exists, redefines it – values: a boolean: yes, on, true or no, off, false
Load fits of arbitrary functions from a plain text file, and create
the corresponding fit-
, mfit-
and sim-
functions, that can be
used with define-derived-fit
or
combine-fits
for instance. Files should look
like this:
# Comments are allowed
michaelis: vmax/(1 + km/x)
sigm-log: log((exp(a_red*log(10.0)) +exp(a_ox*log(10.0)) * \
exp(-fara*(x-e0)))/ \
(1 + exp(-fara*(x-e0))))
Comments are allowed, as are line continuations with \
.
Like for combine-fits
, you cannot redefine existing fits with
this command unless /redefine=true
is specified.
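For instance, assuming the lines above are saved in a file named my-fits.txt (the file name is only illustrative), loading it makes the new fits available as regular commands:
QSoas> load-fits my-fits.txt
QSoas> fit-michaelis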
custom-fit – Define fit
custom-fit name formula /redefine=yes-no
- name: Name for the new fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- formula: Mathematical expression for the fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /redefine=yes-no: If the fit already exists, redefines it – values: a boolean: yes, on, true or no, off, false
Directly defines a custom fit with the given name and formula. Equivalent to having a line
name: formula
in a file loaded by load-fits
.
Like for combine-fits
, you cannot redefine existing fits with
this command unless /redefine=true
is specified.
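For instance, the michaelis fit from the load-fits example above could be defined directly from the command line:
QSoas> custom-fit michaelis vmax/(1+km/x)
QSoas> fit-michaelis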
Implicit fits
QSoas provides facilities for fitting implicit equations to data, for which there is no closed form for the y values: they are instead defined by equations.
fit-implicit – Fit: Implicit fit
fit-implicit formula /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /iterations=integer /parameters=file /prec-absolute=number /prec-relative=number /script=file /set-from-meta=parameters-meta-data (see there) /window-title=text /with=time-dependent parameters (interactive)
- formula: formula for the fit (y is the variable) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /iterations=integer: Maximum number of iterations before giving up – values: an integer
- /parameters=file: pre-loads parameters – values: name of a file
- /prec-absolute=number: absolute precision required – values: a floating-point number
- /prec-relative=number: relative precision required – values: a floating-point number
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /with=time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (like co:2,exp), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Fits an implicit equation to the data. The equation is solved for the
variable y
, and it is implicitly taken to be 0. For instance:
QSoas> fit-implicit y*exp(a*y)-x
This fits the solution (in y) of the equation y*exp(a*y) = x to the data.
mfit-implicit – Multi fit: Implicit fit
mfit-implicit formula datasets… /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /iterations=integer /parameters=file /perp-meta=text /prec-absolute=number /prec-relative=number /script=file /set-from-meta=parameters-meta-data (see there) /weight-buffers=yes-no /window-title=text /with=time-dependent parameters (interactive)
- formula: formula for the fit (y is the variable) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /iterations=integer: Maximum number of iterations before giving up – values: an integer
- /parameters=file: pre-loads parameters – values: name of a file
- /perp-meta=text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /prec-absolute=number: absolute precision required – values: a floating-point number
- /prec-relative=number: relative precision required – values: a floating-point number
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /weight-buffers=yes-no: whether or not to weight datasets (off by default) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /with=time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (like co:2,exp), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Multidataset variant of fit-implicit
.
sim-implicit – Simulation: Implicit fit
sim-implicit formula parameters datasets… /debug=integer /engine=engine /extra-parameters=text /flags=flags /for-which=code /iterations=integer /operation=choice /override=overrides /prec-absolute=number /prec-relative=number /reversed=yes-no /set-meta=meta-data /style=style /with=time-dependent parameters
- formula: formula for the fit (y is the variable) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /iterations=integer: Maximum number of iterations before giving up – values: an integer
- /operation=choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of: annotate, compute, jacobian, push, reexport, residuals, subfunctions
- /override=overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;
- /prec-absolute=number: absolute precision required – values: a floating-point number
- /prec-relative=number: relative precision required – values: a floating-point number
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
- /with=time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (like co:2,exp), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Simulation command for the fit-implicit
fit.
define-implicit-fit – Define implicit fit
define-implicit-fit name formula /redefine=yes-no
- name: Name for the new fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- formula: Mathematical expression for the implicit fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /redefine=yes-no: If the fit already exists, redefines it – values: a boolean: yes, on, true or no, off, false
This is the equivalent of custom-fit
but for implicit
fits.
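For instance, the implicit equation from the fit-implicit example above could be turned into a named fit (the name lambert is arbitrary):
QSoas> define-implicit-fit lambert y*exp(a*y)-x
Like with custom-fit, the new fit should then be available through the corresponding fit-, mfit- and sim- commands.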
Peak fits
The fits in this section can be used to fit various “peaks” following different distributions, such as:
- the gaussian distribution (fit-gaussian)
- the lorentzian distribution (fit-lorentzian)
- the pseudo-Voigt distribution (fit-pseudo-voigt)
For all these fits, you can specify the number of “peaks” using a
common /number
option. For each peak, there is a position, an
amplitude and a width parameter. If you are more interested in the
total surface under the peak rather than the amplitude of the peak,
the fits provide a /use-surface
argument that changes the amplitude
parameter into a surface one.
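For instance (the values are arbitrary), to fit two gaussian peaks using the peak surfaces rather than their amplitudes as parameters:
QSoas> fit-gaussian /number=2 /use-surface=true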
fit-gaussian – Fit: One or several gaussians
fit-gaussian /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /number=integer /parameters=file /script=file /set-from-meta=parameters-meta-data (see there) /use-surface=yes-no /window-title=text (interactive)
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /number=integer: number of distinct peaks (default 1) – values: an integer
- /parameters=file: pre-loads parameters – values: name of a file
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /use-surface=yes-no: whether to use a surface or an amplitude parameter (default false) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits a number of gaussians (and an offset).
More information in the GSL documentation.
The /number
option controls the number of different peaks, while
using /use-surface=true
fits the surface of the peak instead of the
amplitude.
Subfunctions
Each individual peak, with the offset. Displayed by default.
mfit-gaussian – Multi fit: One or several gaussians
mfit-gaussian datasets… /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /number=integer /parameters=file /perp-meta=text /script=file /set-from-meta=parameters-meta-data (see there) /use-surface=yes-no /weight-buffers=yes-no /window-title=text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /number=integer: number of distinct peaks (default 1) – values: an integer
- /parameters=file: pre-loads parameters – values: name of a file
- /perp-meta=text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /use-surface=yes-no: whether to use a surface or an amplitude parameter (default false) – values: a boolean: yes, on, true or no, off, false
- /weight-buffers=yes-no: whether or not to weight datasets (off by default) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset variant of the fit-gaussian
fit.
sim-gaussian – Simulation: One or several gaussians
sim-gaussian parameters datasets… /debug=integer /engine=engine /extra-parameters=text /flags=flags /for-which=code /number=integer /operation=choice /override=overrides /reversed=yes-no /set-meta=meta-data /style=style /use-surface=yes-no
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /number=integer: number of distinct peaks (default 1) – values: an integer
- /operation=choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of: annotate, compute, jacobian, push, reexport, residuals, subfunctions
- /override=overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
- /use-surface=yes-no: whether to use a surface or an amplitude parameter (default false) – values: a boolean: yes, on, true or no, off, false
Simulation command for the fit-gaussian
fit.
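For instance, assuming gaussians.params is a parameter file saved from a fit-gaussian fit with two peaks (the file name and dataset number are only illustrative), the simulated curve can be recomputed on dataset 0 and flagged:
QSoas> sim-gaussian gaussians.params 0 /number=2 /flags=simulated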
fit-lorentzian – Fit: A Lorentzian (also named Cauchy distribution)
fit-lorentzian /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /number=integer /parameters=file /script=file /set-from-meta=parameters-meta-data (see there) /use-surface=yes-no /window-title=text (interactive)
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /number=integer: number of distinct peaks (default 1) – values: an integer
- /parameters=file: pre-loads parameters – values: name of a file
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /use-surface=yes-no: whether to use a surface or an amplitude parameter (default false) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits a number of lorentzians (and an offset).
More information in the GSL documentation.
The /number
option controls the number of different peaks, while
using /use-surface=true
fits the surface of the peak instead of the
amplitude.
Subfunctions
Each individual peak, with the offset. Displayed by default.
mfit-lorentzian – Multi fit: A Lorentzian (also named Cauchy distribution)
mfit-lorentzian datasets… /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /number=integer /parameters=file /perp-meta=text /script=file /set-from-meta=parameters-meta-data (see there) /use-surface=yes-no /weight-buffers=yes-no /window-title=text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /number=integer: number of distinct peaks (default 1) – values: an integer
- /parameters=file: pre-loads parameters – values: name of a file
- /perp-meta=text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /use-surface=yes-no: whether to use a surface or an amplitude parameter (default false) – values: a boolean: yes, on, true or no, off, false
- /weight-buffers=yes-no: whether or not to weight datasets (off by default) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset variant of the fit-lorentzian
fit.
sim-lorentzian – Simulation: A Lorentzian (also named Cauchy distribution)
sim-lorentzian parameters datasets… /debug=integer /engine=engine /extra-parameters=text /flags=flags /for-which=code /number=integer /operation=choice /override=overrides /reversed=yes-no /set-meta=meta-data /style=style /use-surface=yes-no
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /number=integer: number of distinct peaks (default 1) – values: an integer
- /operation=choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of: annotate, compute, jacobian, push, reexport, residuals, subfunctions
- /override=overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;
- /reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
- /set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
- /style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
- /use-surface=yes-no: whether to use a surface or an amplitude parameter (default false) – values: a boolean: yes, on, true or no, off, false
Simulation command for the fit-lorentzian
fit.
fit-pseudo-voigt – Fit: A pseudo-voigt distribution (linear combination of a gaussian and a lorentzian)
fit-pseudo-voigt /arg1=file /arg2=file /arg3=file /debug=integer /engine=engine /expert=yes-no /extra-parameters=text /number=integer /parameters=file /script=file /set-from-meta=parameters-meta-data (see there) /use-surface=yes-no /window-title=text (interactive)
- /arg1=file: first argument of the script file – values: name of a file
- /arg2=file: second argument of the script file – values: name of a file
- /arg3=file: third argument of the script file – values: name of a file
- /debug=integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer
- /engine=engine: The startup fit engine – values: Fit engine, one of: gsl-simplex, lmder, lmniel, lmsder, multi, odrpack, pso, qsoas, simplex
- /expert=yes-no: runs the fit in expert mode – values: a boolean: yes, on, true or no, off, false
- /extra-parameters=text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /number=integer: number of distinct peaks (default 1) – values: an integer
- /parameters=file: pre-loads parameters – values: name of a file
- /script=file: runs a script file – values: name of a file
- /set-from-meta=parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=meta specifications
- /use-surface=yes-no: whether to use a surface or an amplitude parameter (default false) – values: a boolean: yes, on, true or no, off, false
- /window-title=text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits a number of pseudo-Voigt peaks, i.e. linear combinations of a gaussian and a lorentzian (and an offset).
Subfunctions
Each individual peak, with the offset. Displayed by default.
mfit-pseudo-voigt
– Multi fit: A pseudo-voigt distribution (linear combination of a gaussian and a lorentzian)
mfit-pseudo-voigt
datasets… /arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /number=
integer /parameters=
file /perp-meta=
text /script=
file /set-from-meta=
parameters-meta-data (see there) /use-surface=
yes-no /weight-buffers=
yes-no /window-title=
text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/number=
integer: number of distinct peaks (default 1) – values: an integer/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/use-surface=
yes-no: whether to use a surface or an amplitude parameter (default false) – values: a boolean:yes
,on
,true
orno
,off
,false
/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset variant of the fit-pseudo-voigt
fit.
sim-pseudo-voigt
– Simulation: A pseudo-voigt distribution (linear combination of a gaussian and a lorentzian)
sim-pseudo-voigt
parameters datasets… /debug=
integer /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /number=
integer /operation=
choice /override=
overrides /reversed=
yes-no /set-meta=
meta-data /style=
style /use-surface=
yes-no
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/number=
integer: number of distinct peaks (default 1) – values: an integer/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/use-surface=
yes-no: whether to use a surface or an amplitude parameter (default false) – values: a boolean:yes
,on
,true
orno
,off
,false
Simulation command for the fit-pseudo-voigt
fit.
Redox titration fits
fit-nernst
– Fit: Nernstian behaviour
fit-nernst
/arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /parameters=
file /script=
file /set-from-meta=
parameters-meta-data (see there) /species=
integer /species-names=
words /states=
integers /window-title=
text (interactive)
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/parameters=
file: pre-loads parameters – values: name of a file/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/species=
integer: Number of distinct species (regardless of their redox state) – values: an integer/species-names=
words: Names of the species – values: several words, separated by ‘,’/states=
integers: Number of redox states for each species – values: a comma-separated list of integers/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits the Nernst equation for a number of chemical species present
under several redox states to the dataset, that represents absorbance
(or something else) as a function of potential. The number of species
is given to the /species
option, while the number of redox states
for each species is given to the /states
option. Alternatively, if
you need distinct species with a different number of redox states, you
can specify a comma-separated list of numbers of states to /states
,
in which case /species
is ignored. For instance, to fit the Nernst
equation for two species, one present in 4 redox states and the other
in two redox states, one can use:
QSoas> fit-nernst /states=4,2
The species are designated using a lowercase letter suffix, while the
redox state is designated using red
, int
or ox
when there are 3
states or fewer, or with a number for more than three states.
Note: be aware that if there is more than one species, the system
is intrinsically overdetermined, which is why QSoas
automatically
fixes the absorbance of the reduced species of all but the first one to
0 (but you can change that).
This fit is useful to fit the results of a redox titration at a
single wavelength. If several wavelengths are available, separate them
into several datasets as a function of the potential and fit them using
mfit-nernst
, while keeping the redox potentials
(and electron numbers) global and only the absorbances as
dataset-local.
You can also choose the names of the species, which influences the
names of the parameters, by using the /species-names=
option.
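For instance, assuming two species with two redox states each and giving them purely illustrative names, one could use something like:
QSoas> fit-nernst /states=2,2 /species-names=hemeA,hemeB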
mfit-nernst
– Multi fit: Nernstian behaviour
mfit-nernst
datasets… /arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /parameters=
file /perp-meta=
text /script=
file /set-from-meta=
parameters-meta-data (see there) /species=
integer /species-names=
words /states=
integers /weight-buffers=
yes-no /window-title=
text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/species=
integer: Number of distinct species (regardless of their redox state) – values: an integer/species-names=
words: Names of the species – values: several words, separated by ‘,’/states=
integers: Number of redox states for each species – values: a comma-separated list of integers/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset version of fit-nernst
. To be used
for fitting multi-wavelength redox titrations.
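As a sketch, assuming the single-wavelength titration curves have been pushed onto the stack as datasets 0 to 3, one could run something like:
QSoas> mfit-nernst 0,1,2,3 /states=4,2
and keep the redox potentials (and electron numbers) global in the fit window, leaving only the absorbances dataset-local.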
sim-nernst
– Simulation: Nernstian behaviour
sim-nernst
parameters datasets… /debug=
integer /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /operation=
choice /override=
overrides /reversed=
yes-no /set-meta=
meta-data /species=
integer /species-names=
words /states=
integers /style=
style
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignements/species=
integer: Number of distinct species (regardless of their redox state) – values: an integer/species-names=
words: Names of the species – values: several words, separated by ‘,’/states=
integers: Number of redox states for each species – values: a comma-separated list of integers/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Simulation command for the fit-nernst
fit.
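As with the other sim- commands, it takes a saved parameters file and the datasets whose X values (here, potentials) are reused; for instance, assuming parameters were saved beforehand to a hypothetical file nernst.params:
QSoas> sim-nernst nernst.params 0 /states=4,2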
Adsorbed redox species
fit-adsorbed
– Fit: Adsorbed species
fit-adsorbed
/2el=
integer /arg1=
file /arg2=
file /arg3=
file /debug=
integer /distinct=
yes-no /engine=
engine /expert=
yes-no /extra-parameters=
text /parameters=
file /script=
file /set-from-meta=
parameters-meta-data (see there) /species=
integer /window-title=
text (interactive)
/2el=
integer: Number of true 2-electron species – values: an integer/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/distinct=
yes-no: If true (default) then all species have their own surface concentrations – values: a boolean:yes
,on
,true
orno
,off
,false
/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/parameters=
file: pre-loads parameters – values: name of a file/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/species=
integer: Number of 1-electron species – values: an integer/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits the theoretical current given by a series of species adsorbed to an electrode in electrochemically reversible conditions to the current dataset (see for instance Laviron, J. Electroanal. Chem., 1979 for more details). The total current is the sum of the contributions of the individual peaks.
The number of 1-electron peaks is given by the /species
option
(defaults to 1) and that of the 2-electron peaks is given by the
/2el
option (defaults to 0).
The current for a 1-electron peak has the usual Laviron shape, proportional to exp(x)/(1 + exp(x))^2, with x = n_app F (E − E_0)/(R T), where E_0 is the potential of the couple and n_app the apparent number of electrons. The latter only affects the width of the peak; the stoichiometry is always 1 electron.
The current for the 2-electron peaks is given by Plichon and Laviron, J. Electroanal. Chem., 1976, in terms of E_0, the 2-electron reduction potential (i.e. the average of those of the 1-electron couples), and ΔE, the difference in the reduction potentials of the 1-electron couples (it is positive if the intermediate species is thermodynamically stable).
The Γ parameters are the numbers of moles of the molecules
adsorbed on the electrode. If the
option /distinct=false
is used, the same value of Γ is used
for all couples, while in the other case (the default), each couple
has its own value of Γ (this situation corresponds to unrelated
species). ν is the voltammetric scan rate (in volts per second).
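For instance, to fit a voltammogram expected to contain two 1-electron peaks and one true 2-electron peak sharing a single surface concentration, one might run something like:
QSoas> fit-adsorbed /species=2 /2el=1 /distinct=false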
mfit-adsorbed
– Multi fit: Adsorbed species
mfit-adsorbed
datasets… /2el=
integer /arg1=
file /arg2=
file /arg3=
file /debug=
integer /distinct=
yes-no /engine=
engine /expert=
yes-no /extra-parameters=
text /parameters=
file /perp-meta=
text /script=
file /set-from-meta=
parameters-meta-data (see there) /species=
integer /weight-buffers=
yes-no /window-title=
text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/2el=
integer: Number of true 2-electron species – values: an integer/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/distinct=
yes-no: If true (default) then all species have their own surface concentrations – values: a boolean:yes
,on
,true
orno
,off
,false
/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/species=
integer: Number of 1-electron species – values: an integer/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset version of the adsorbed
fit.
sim-adsorbed
– Simulation: Adsorbed species
sim-adsorbed
parameters datasets… /2el=
integer /debug=
integer /distinct=
yes-no /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /operation=
choice /override=
overrides /reversed=
yes-no /set-meta=
meta-data /species=
integer /style=
style
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/2el=
integer: Number of true 2-electron species – values: an integer/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/distinct=
yes-no: If true (default) then all species have their own surface concentrations – values: a boolean:yes
,on
,true
orno
,off
,false
/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/species=
integer: Number of 1-electron species – values: an integer/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Simulation command for the adsorbed
fit.
Differential equations fits
fit-ode
– Fit: Fit an ODE system
fit-ode
system /adaptive=
yes-no /arg1=
file /arg2=
file /arg3=
file /choose-t0=
yes-no /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /min-step-size=
number /parameters=
file /prec-absolute=
number /prec-relative=
number /script=
file /set-from-meta=
parameters-meta-data (see there) /step-size=
number /stepper=
stepper /sub-steps=
integer /voltammogram=
yes-no /window-title=
text /with=
time-dependent parameters (interactive)
- system: Path to the file describing the ODE system – values: name of a file
/adaptive=
yes-no: whether or not to use an adaptive stepper (on by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/choose-t0=
yes-no: If on, one can choose the initial time – values: a boolean:yes
,on
,true
orno
,off
,false
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/min-step-size=
number: minimum step size for the stepper – values: a floating-point number/parameters=
file: pre-loads parameters – values: name of a file/prec-absolute=
number: absolute precision required – values: a floating-point number/prec-relative=
number: relative precision required – values: a floating-point number/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/step-size=
number: initial step size for the stepper – values: a floating-point number/stepper=
stepper: algorithm used for integration (default: rkf45) – values: ODE stepper algorithm, one of:bsimp
,msadams
,msbdf
,rk1imp
,rk2
,rk2imp
,rk4
,rk4imp
,rk8pd
,rkck
,rkf45
/sub-steps=
integer: If this is not 0, then the smallest step size is that many times smaller than the minimum delta t – values: an integer/voltammogram=
yes-no: – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/with=
time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (likeco:2,exp
), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Using this command, one can fit the results of integrating a system of
differential equations to a dataset. The system is described in a file given as the
system argument. For more details about how to specify the system of
equations, please refer to the documentation of the ode
command. The parameters whose values are not defined in the system
file become the fit parameters. If there is no optional third section
in the system file, the value of the function is by default a
linear combination of the variables of the system.
As with the kinetic-system
fit, some of the parameters of the
system can be varied automatically as a function of time, using the
/with=
option. See time dependent
parameters below for more information.
With /voltammogram=true
, the data is assumed to represent a
voltammogram; an arbitrary number of scans is supported. This has the
effect of converting the X values into time as if unwrap
had been
used, while the value of the scan rate is made available as the v
parameter. It also provides the e
parameter, which designates the
current value of the potential.
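As a sketch (the system file name is hypothetical), fitting a voltammogram with a stiff ODE system could look like:
QSoas> fit-ode ode-system.txt /voltammogram=true /stepper=bsimp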
mfit-ode
– Multi fit: Fit an ODE system
mfit-ode
system datasets… /adaptive=
yes-no /arg1=
file /arg2=
file /arg3=
file /choose-t0=
yes-no /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /min-step-size=
number /parameters=
file /perp-meta=
text /prec-absolute=
number /prec-relative=
number /script=
file /set-from-meta=
parameters-meta-data (see there) /step-size=
number /stepper=
stepper /sub-steps=
integer /voltammogram=
yes-no /weight-buffers=
yes-no /window-title=
text /with=
time-dependent parameters (interactive)
- system: Path to the file describing the ODE system – values: name of a file
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/adaptive=
yes-no: whether or not to use an adaptive stepper (on by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/choose-t0=
yes-no: If on, one can choose the initial time – values: a boolean:yes
,on
,true
orno
,off
,false
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/min-step-size=
number: minimum step size for the stepper – values: a floating-point number/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/prec-absolute=
number: absolute precision required – values: a floating-point number/prec-relative=
number: relative precision required – values: a floating-point number/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/step-size=
number: initial step size for the stepper – values: a floating-point number/stepper=
stepper: algorithm used for integration (default: rkf45) – values: ODE stepper algorithm, one of:bsimp
,msadams
,msbdf
,rk1imp
,rk2
,rk2imp
,rk4
,rk4imp
,rk8pd
,rkck
,rkf45
/sub-steps=
integer: If this is not 0, then the smallest step size is that many times smaller than the minimum delta t – values: an integer/voltammogram=
yes-no: – values: a boolean:yes
,on
,true
orno
,off
,false
/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/with=
time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (likeco:2,exp
), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Multi-dataset version of the ode
fit.
sim-ode
– Simulation: Fit an ODE system
sim-ode
system parameters datasets… /adaptive=
yes-no /choose-t0=
yes-no /debug=
integer /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /min-step-size=
number /operation=
choice /override=
overrides /prec-absolute=
number /prec-relative=
number /reversed=
yes-no /set-meta=
meta-data /step-size=
number /stepper=
stepper /style=
style /sub-steps=
integer /voltammogram=
yes-no /with=
time-dependent parameters
- system: Path to the file describing the ODE system – values: name of a file
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/adaptive=
yes-no: whether or not to use an adaptive stepper (on by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/choose-t0=
yes-no: If on, one can choose the initial time – values: a boolean:yes
,on
,true
orno
,off
,false
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/min-step-size=
number: minimum step size for the stepper – values: a floating-point number/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/prec-absolute=
number: absolute precision required – values: a floating-point number/prec-relative=
number: relative precision required – values: a floating-point number/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/step-size=
number: initial step size for the stepper – values: a floating-point number/stepper=
stepper: algorithm used for integration (default: rkf45) – values: ODE stepper algorithm, one of:bsimp
,msadams
,msbdf
,rk1imp
,rk2
,rk2imp
,rk4
,rk4imp
,rk8pd
,rkck
,rkf45
/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/sub-steps=
integer: If this is not 0, then the smallest step size is that many times smaller than the minimum delta t – values: an integer/voltammogram=
yes-no: – values: a boolean:yes
,on
,true
orno
,off
,false
/with=
time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (likeco:2,exp
), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Simulation command for the ode
fit.
ODE steppers
The fits and commands that perform ordinary differential equations (ODE)
integrations, such as the kinetic-system
or ode
fits,
have a /stepper=
option that controls the stepper used, that is
the algorithm that integrates the ODE.
Steppers are divided into two main categories: the explicit steppers,
which are fast, and the implicit steppers, which are slower but
handle stiff problems much better, that is when the values of the derivatives
are large (typically when using very large kinetic constants for
kinetic-system
for instance).
The explicit steppers are rk2
, rk4
, rk8pd
, rkck
and
rkf45
. We recommend the use of rkf45
when it is possible.
The implicit steppers are bsimp
, msadams
, msbdf
, rk1imp
,
rk2imp
and rk4imp
. We recommend the use of bsimp
for stiff
problems.
We refer the reader to the stepper documentation of the GSL for more information.
Kinetic systems
It is possible with QSoas to fit kinetic traces that follow the concentration of one or more species that are part of a full kinetic system. For that, you need to write a simple text file of the following form:
A <=>[k_i][k_a] I1
I1 ->[k_i2 * o2] I2
This describes a kinetic system with three species, A
, I1
and
I2
, with a reversible reaction from A
to I1
with a forward
rate of k_i
and a backward rate of k_a
, and an irreversible
reaction from I1
to I2
with a rate of k_i2 * o2
.
QSoas automatically detects the parameters of the fit, here k_i
,
k_a
, k_i2
and o2
, and the initial concentrations of A
, I1
and I2
, namely c0_A
, c0_I1
and c0_I2
. As for arbitrary fits
(fit-arb
), do not use parameters that start with a capital
letter. There is no such restriction on the names of the species.
It is also possible to specify bimolecular reactions (or any molecularity):
A + B <=>[k_1][km_1] C
The rate is deduced from the rate constants as if it were an
elementary reaction, but you can use arbitrary functions of the
concentrations as rate constants (by prefixing the species name with
c_
). For Michaelis-Menten kinetics, use for instance:
S ->[k/(1 + km/c_S)] P
The files can contain comment lines starting with a #
, and can
contain an arbitrarily large number of reactions.
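Putting this together, a minimal kinetic system file with a comment line could look like this (the rate constant and species names are arbitrary):
# inactivation of A through the intermediate I1
A <=>[k_i][k_a] I1
I1 ->[k_i2 * o2] I2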
It is possible to assign special time dependence to any of the
parameters by using the /with
option:
QSoas> fit-kinetic-system /with=o2:3,exp kinetic-file.txt
This gives to o2
the value of the sum of three exponential decays
shifted in time (see formula below); this possibility is documented in
greater detail below.
By default, the fitted value is a linear combination of all the
concentrations, with the coefficients given by parameters of name
y_A
(for the coefficient for the concentration of species A
, for
instance).
However, it is possible to include in the kinetic system file a line
starting with y =
to define a formula to be fitted. For instance,
in the file
A + B <=>[k_1][km_1] C
y = c_C**2
the function fitted is the square of the concentration of C
. The
formula can contain arbitrary functions, just like the arbitrary
fits, can introduce new parameters, and can refer to the time t
and to
any of the concentrations.
Redox reactions
You can specify redox reactions this way:
A + e- <=>[e0][k0] B
By default, the model for the electron transfer is symmetric
Butler-Volmer, but using /redox-type=bva
will give you the asymmetric
one, with an extra parameter corresponding to the charge transfer
coefficient α. Using /redox-type=mhc
will use the Marcus-Hush-Chidsey
model, parametrized thus:
A + e- <=>[e0][k0][lambda0] B
As in Butler-Volmer, k0
still corresponds to the rate at zero
driving force, and lambda0
is the reorganization energy. The
computation of the Marcus rate uses a fast algorithm (see
Fourmond and Leger, J. Electroanal. Chem.,
2020).
The parameter used for the electrode potential is called e
. It is
automatically taken from the data when you use the
/voltammogram=true
option to fit-ode
or
fit-kinetic-system
, but you could maintain it constant or set
it using time dependent parameters.
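As an illustration (the file name is hypothetical), fitting a voltammogram with a kinetic system whose redox reactions are handled with the Marcus-Hush-Chidsey model could look like:
QSoas> fit-kinetic-system cv-system.txt /redox-type=mhc /voltammogram=true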
To define a new fit that you could combine with others using
combine-fits
, use define-kinetic-system-fit
.
fit-kinetic-system
– Fit: Full kinetic system
fit-kinetic-system
system /adaptive=
yes-no /arg1=
file /arg2=
file /arg3=
file /choose-t0=
yes-no /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /min-step-size=
number /parameters=
file /prec-absolute=
number /prec-relative=
number /redox-type=
choice /script=
file /set-from-meta=
parameters-meta-data (see there) /step-size=
number /stepper=
stepper /sub-steps=
integer /voltammogram=
yes-no /window-title=
text /with=
time-dependent parameters (interactive)
- system: file describing the system – values: name of a file
/adaptive=
yes-no: whether or not to use an adaptive stepper (on by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/choose-t0=
yes-no: If on, one can choose the initial time – values: a boolean:yes
,on
,true
orno
,off
,false
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/min-step-size=
number: minimum step size for the stepper – values: a floating-point number/parameters=
file: pre-loads parameters – values: name of a file/prec-absolute=
number: absolute precision required – values: a floating-point number/prec-relative=
number: relative precision required – values: a floating-point number/redox-type=
choice: Default type for redox reactions – values: one of:bv
,bva
,mhc
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/step-size=
number: initial step size for the stepper – values: a floating-point number/stepper=
stepper: algorithm used for integration (default: rkf45) – values: ODE stepper algorithm, one of:bsimp
,msadams
,msbdf
,rk1imp
,rk2
,rk2imp
,rk4
,rk4imp
,rk8pd
,rkck
,rkf45
/sub-steps=
integer: If this is not 0, then the smallest step size is that many times smaller than the minimum delta t – values: an integer/voltammogram=
yes-no: – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/with=
time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (likeco:2,exp
), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Fits a full kinetic system.
Parameter restrictions
A rate constant cannot be negative.
mfit-kinetic-system
– Multi fit: Full kinetic system
mfit-kinetic-system
system datasets… /adaptive=
yes-no /arg1=
file /arg2=
file /arg3=
file /choose-t0=
yes-no /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /min-step-size=
number /parameters=
file /perp-meta=
text /prec-absolute=
number /prec-relative=
number /redox-type=
choice /script=
file /set-from-meta=
parameters-meta-data (see there) /step-size=
number /stepper=
stepper /sub-steps=
integer /voltammogram=
yes-no /weight-buffers=
yes-no /window-title=
text /with=
time-dependent parameters (interactive)
- system: file describing the system – values: name of a file
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/adaptive=
yes-no: whether or not to use an adaptive stepper (on by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/choose-t0=
yes-no: If on, one can choose the initial time – values: a boolean:yes
,on
,true
orno
,off
,false
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/min-step-size=
number: minimum step size for the stepper – values: a floating-point number/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/prec-absolute=
number: absolute precision required – values: a floating-point number/prec-relative=
number: relative precision required – values: a floating-point number/redox-type=
choice: Default type for redox reactions – values: one of:bv
,bva
,mhc
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/step-size=
number: initial step size for the stepper – values: a floating-point number/stepper=
stepper: algorithm used for integration (default: rkf45) – values: ODE stepper algorithm, one of:bsimp
,msadams
,msbdf
,rk1imp
,rk2
,rk2imp
,rk4
,rk4imp
,rk8pd
,rkck
,rkf45
/sub-steps=
integer: If this is not 0, then the smallest step size is that many times smaller than the minimum delta t – values: an integer/voltammogram=
yes-no: – values: a boolean:yes
,on
,true
orno
,off
,false
/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/with=
time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (likeco:2,exp
), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Multi-dataset variant of the
kinetic-system
fit.
sim-kinetic-system
– Simulation: Full kinetic system
sim-kinetic-system
system parameters datasets… /adaptive=
yes-no /choose-t0=
yes-no /debug=
integer /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /min-step-size=
number /operation=
choice /override=
overrides /prec-absolute=
number /prec-relative=
number /redox-type=
choice /reversed=
yes-no /set-meta=
meta-data /step-size=
number /stepper=
stepper /style=
style /sub-steps=
integer /voltammogram=
yes-no /with=
time-dependent parameters
- system: file describing the system – values: name of a file
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/adaptive=
yes-no: whether or not to use an adaptive stepper (on by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/choose-t0=
yes-no: If on, one can choose the initial time – values: a boolean:yes
,on
,true
orno
,off
,false
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/min-step-size=
number: minimum step size for the stepper – values: a floating-point number/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/prec-absolute=
number: absolute precision required – values: a floating-point number/prec-relative=
number: relative precision required – values: a floating-point number/redox-type=
choice: Default type for redox reactions – values: one of:bv
,bva
,mhc
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/step-size=
number: initial step size for the stepper – values: a floating-point number/stepper=
stepper: algorithm used for integration (default: rkf45) – values: ODE stepper algorithm, one of:bsimp
,msadams
,msbdf
,rk1imp
,rk2
,rk2imp
,rk4
,rk4imp
,rk8pd
,rkck
,rkf45
/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/sub-steps=
integer: If this is not 0, then the smallest step size is that many times smaller than the minimum delta t – values: an integer/voltammogram=
yes-no: – values: a boolean:yes
,on
,true
orno
,off
,false
/with=
time-dependent parameters: Make certain parameters depend upon time – values: several specifications of time dependent parameters (likeco:2,exp
), separated by ‘;’. Available types: biexp, exp, ramps, rexp, steps
Simulation command for the
kinetic-system
fit.
define-kinetic-system-fit
– Define a fit based on a kinetic model
define-kinetic-system-fit
file name /redefine=
yes-no /redox-type=
choice
- file: System to load – values: name of a file
- name: Name of the newly created fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
/redefine=
yes-no: If the fit already exists, redefines it – values: a boolean:yes
,on
,true
orno
,off
,false
/redox-type=
choice: Default type for redox reactions – values: one of:bv
,bva
,mhc
In the fit-kinetic-system
fit, one always has to provide
the name of the file that contains the kinetic system.
This prevents the use of kinetic system fits with combine-fits
or define-derived-fit
.
The define-kinetic-system-fit
command defines a new fit for the
kinetic system contained in file. The kinetic system is read only
once: if you later modify the kinetic system file, the changes will
not be taken into account.
Like for combine-fits
, you cannot redefine existing fits with
this command unless /redefine=true
is specified.
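For instance (the file and fit names are arbitrary):
QSoas> define-kinetic-system-fit my-system.txt mysystem
The newly defined fit should then be usable like any other fit, including with combine-fits or define-derived-fit.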
Time-dependent parameters
Some fits, namely arb
, ode
and kinetic-system
(and all the custom fits defined using custom-fit
or
define-kinetic-system-fit
) have a built-in possibility to
have some parameters depend on time (instead of being constant). This
can be used in kinetic systems to impose an external dependence on
various parameters. It makes it possible to separate the chemistry of
the system (defined in the kinetic system file), and the experimental
procedure by which you vary the conditions (governed by the
time-dependent parameters).
The time-dependent parameters are defined using the /with=
option to
the fits. This option takes a ;
-separated list of specifications of
the form: parameter:number,type,options… where parameter
is the name of the parameter that will depend on time, type is the
type of the dependence (see below), number (not always needed) is
the number of “features” in the dependence (very type-dependent), and
options can additionally be used for some types.
QSoas
recognizes the following time-dependences:
exp
, where the given parameter is given by a sum of exponential decays shifted in time,
each switched on by the Heaviside step function (1 for positive argument,
0 otherwise); the number given just after the :
is the number of such decays (in the command
below, that means you will have three different steps). You may wish
to have all the time constants common, which you do by adding ,common
in the spec:
QSoas> fit-kinetic-system /with=o2:3,exp,common kinetic-file.txt
This kind of function was used to analyse the inhibition of NiFe hydrogenase by CO and O2; see for instance Liebgott et al., Nat. Chem. Biol., 2010.
- biexp, same as exp but with bi-exponential decays.
- steps, where the given parameter takes a series of values (1 more than the number given) at the given times.
- ramps, in which the given parameter is constant before a certain time, and then varies linearly in time to reach the next values.
- rexp, which combines steps and exp: the time is divided into number segments (preceded by an initial segment in which the parameter is fixed), in each of which the parameter relaxes exponentially towards a new value.
As for exp
, the time constant can be chosen to be common to all
the segments by adding ,common
after the spec.
You can specify several independent parameters if you separate their
descriptions by ;
QSoas> fit-kinetic-system /with=o2:3,exp;o3:2,rexp kinetic-file.txt
This defines the dependence over time of two parameters: o2
, like
above, and o3
, which follows two exponential relaxations.
Another way to look at the different types of time-dependent
parameters available in your version of QSoas
is to run the file
make-all.cmds
from the tests/time-dependent-parameters
directory
of the source code archive.
Synchronised parameters
It is possible to specify more than one parameter on the left of the
:
, like for instance:
QSoas> fit-kinetic-system /with=o2,o3:3,exp kinetic-file.txt
In that case, the parameters governing the time dependence (time
constant, starting time in this case) are common to both o2
and
o3
. However, the amplitudes can still be tuned independently for each
parameter.
Slow scans fits
These specific fits were used in the context of the interpretation of cyclic voltammograms of adsorbed nickel-iron hydrogenase that undergoes inactivation under oxidizing conditions. For more information, refer to Abou-Hamdam et al., PNAS, 2012.
fit-slow-scan-hp
– Fit: Slow scan test
fit-slow-scan-hp
/arg1=
file /arg2=
file /arg3=
file /bi-exp=
yes-no /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /parameters=
file /scaling=
yes-no /script=
file /set-from-meta=
parameters-meta-data (see there) /window-title=
text (interactive)
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/bi-exp=
yes-no: whether the relaxation is bi-exponential or mono-exponential (false by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/parameters=
file: pre-loads parameters – values: name of a file/scaling=
yes-no: if on, use an additional scaling factor (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fit for the “high-potential” part of a slow voltammetric scan where inactivation occurs with rate constants that do not depend on time. The current for the active form is assumed to depend linearly on potential.
The formula involves the vertex potential, the initial potential, the rate constant of the decrease, the amount of initially active enzyme, the equilibrium concentration of active species, and the scan rate.
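As a sketch, a bi-exponential relaxation with an additional scaling factor can be requested through the corresponding options:
QSoas> fit-slow-scan-hp /bi-exp=true /scaling=true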
fit-slow-scan-lp
– Fit: Slow scan test
fit-slow-scan-lp
/arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /explicit-rate=
yes-no /extra-parameters=
text /parameters=
file /script=
file /set-from-meta=
parameters-meta-data (see there) /window-title=
text (interactive)
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/explicit-rate=
yes-no: whether the scan rate is an explicit parameter of the fit (default false) – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/parameters=
file: pre-loads parameters – values: name of a file/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fit for the “low-potential” part of a slow voltammetric scan where the enzyme reactivates with a rate constant that depends exponentially on the potential. The overall formula involves the initial potential and the scan rate.
mfit-slow-scan-hp
– Multi fit: Slow scan test
mfit-slow-scan-hp
datasets… /arg1=
file /arg2=
file /arg3=
file /bi-exp=
yes-no /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /parameters=
file /perp-meta=
text /scaling=
yes-no /script=
file /set-from-meta=
parameters-meta-data (see there) /weight-buffers=
yes-no /window-title=
text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/bi-exp=
yes-no: whether the relaxation is bi-exponential or mono-exponential (false by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/scaling=
yes-no: if on, use an additional scaling factor (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset variant of the
fit-slow-scan-hp
fit.
mfit-slow-scan-lp
– Multi fit: Slow scan test
mfit-slow-scan-lp
datasets… /arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /explicit-rate=
yes-no /extra-parameters=
text /parameters=
file /perp-meta=
text /script=
file /set-from-meta=
parameters-meta-data (see there) /weight-buffers=
yes-no /window-title=
text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/explicit-rate=
yes-no: whether the scan rate is an explicit parameter of the fit (default false) – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset variant of the
fit-slow-scan-lp
fit.
sim-slow-scan-lp
– Simulation: Slow scan test
sim-slow-scan-lp
parameters datasets… /debug=
integer /engine=
engine /explicit-rate=
yes-no /extra-parameters=
text /flags=
flags /for-which=
code /operation=
choice /override=
overrides /reversed=
yes-no /set-meta=
meta-data /style=
style
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/explicit-rate=
yes-no: whether the scan rate is an explicit parameter of the fit (default false) – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Simulation command for the
slow-scan-lp
fit.
sim-slow-scan-hp
– Simulation: Slow scan test
sim-slow-scan-hp
parameters datasets… /bi-exp=
yes-no /debug=
integer /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /operation=
choice /override=
overrides /reversed=
yes-no /scaling=
yes-no /set-meta=
meta-data /style=
style
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/bi-exp=
yes-no: whether the relaxation is bi-exponential or mono-exponential (false by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/scaling=
yes-no: if on, use an additional scaling factor (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Simulation command for the
slow-scan-hp
fit.
Wave shape fits
These fits model the catalytic wave shape of active sites with either 2 or 3 redox states, and one catalytic reaction that can be reversible. The equations for these models were initially described in Fourmond et al, JACS 2013, and were reviewed and reparametrized in Fourmond and Léger, Curr Op Electrochemistry 2017. There are 5 different fits:
- fit-eci-wave: a 1-electron 1-way catalyst;
- fit-ecr-wave: a 1-electron 2-way (reversible) catalyst;
- fit-eeci-wave: a 2-electron 1-way catalyst;
- fit-eecr-wave: a 2-electron 2-way (reversible) catalyst;
- fit-eecr-relay-wave: a 2-electron 2-way (reversible) catalyst with an electron relay.
All these fits (except the eecr-relay-wave fit) share common options:
- /model describes the approximation used: with nernst, the electron transfers are at equilibrium; with slow-et, slow electron transfer is taken into account using Butler-Volmer type kinetics; disp-k0 is slow electron transfer with a dispersion of values of k0, as described in Léger et al, J. Phys. Chem. B 2002; and bd0-inf is the special case of the former when the limiting value of the current at extreme potentials is not reached.
- /reduction, for irreversible models, describes whether the fit models the oxidative direction (by default) or the reductive direction (/reduction=true). For reversible models, it defines the reference direction (i.e. whether the limiting current is an oxidation or a reduction current).
The fits of reversible models also have the following extra option:
- /use-eoc. The open circuit potential (for which the current is 0) is entirely determined by the potentials of the active site and the ratio of the catalytic rates in the two directions. Therefore, instead of using the latter ratio as a parameter, it is equivalent to use the open circuit potential; /use-eoc=true does that. See more about that in Fourmond et al, JACS 2013.
The equations for the fits differ depending on the model:
- for nernst:
- for slow-et:
- for disp-k0:
- for bd0-inf:
In the formulas below, we use the shortcut , and is the catalytic rate in the “reference” direction, while is that in the other direction (for reversible fits).
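As a quick illustration of these options (not part of the original reference; the stack numbers 0,1,2 are placeholders for whichever datasets you actually want to fit):
fit-eecr-wave /model=slow-et /use-eoc=true
mfit-eci-wave 0,1,2 /model=nernst /reduction=true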
fit-eci-wave
– Fit: Fit of an EC irreversible catalytic wave
fit-eci-wave
/arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /model=
choice /parameters=
file /reduction=
yes-no /script=
file /set-from-meta=
parameters-meta-data (see there) /window-title=
text (interactive)
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/parameters=
file: pre-loads parameters – values: name of a file/reduction=
yes-no: if on, models a reductive wave (default: off, hence oxidative wave) – values: a boolean:yes
,on
,true
orno
,off
,false
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits the wave shape of an irreversible 1-electron catalytic cycle to the current dataset.
For the oxidative direction:
For the reductive direction:
mfit-eci-wave
– Multi fit: Fit of an EC irreversible catalytic wave
mfit-eci-wave
datasets… /arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /model=
choice /parameters=
file /perp-meta=
text /reduction=
yes-no /script=
file /set-from-meta=
parameters-meta-data (see there) /weight-buffers=
yes-no /window-title=
text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/reduction=
yes-no: if on, models a reductive wave (default: off, hence oxidative wave) – values: a boolean:yes
,on
,true
orno
,off
,false
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset version of the eci-wave
fit.
sim-eci-wave
– Simulation: Fit of an EC irreversible catalytic wave
sim-eci-wave
parameters datasets… /debug=
integer /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /model=
choice /operation=
choice /override=
overrides /reduction=
yes-no /reversed=
yes-no /set-meta=
meta-data /style=
style
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/reduction=
yes-no: if on, models a reductive wave (default: off, hence oxidative wave) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Simulation command for the eci-wave
fit.
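For instance, a hedged example of the corresponding simulation, where eci.params is a hypothetical parameter file saved from a previous fit and 0 designates a dataset in the stack whose X values are reused:
sim-eci-wave eci.params 0 /model=slow-et /operation=compute /style=red-to-blue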
fit-ecr-wave
– Fit: Fit of an EC reversible catalytic wave
fit-ecr-wave
/arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /model=
choice /parameters=
file /reduction=
yes-no /script=
file /set-from-meta=
parameters-meta-data (see there) /use-eoc=
yes-no /window-title=
text (interactive)
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/parameters=
file: pre-loads parameters – values: name of a file/reduction=
yes-no: if on, use the reductive direction as reference (default: oxidative direction as reference) – values: a boolean:yes
,on
,true
orno
,off
,false
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/use-eoc=
yes-no: whether to use explicitly the bias or compute it using the open circuit potential (default: false) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits the wave shape of a reversible 1-electron catalytic cycle to the current dataset.
For the oxidative direction:
For the reductive direction:
mfit-ecr-wave
– Multi fit: Fit of an EC reversible catalytic wave
mfit-ecr-wave
datasets… /arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /model=
choice /parameters=
file /perp-meta=
text /reduction=
yes-no /script=
file /set-from-meta=
parameters-meta-data (see there) /use-eoc=
yes-no /weight-buffers=
yes-no /window-title=
text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/reduction=
yes-no: if on, use the reductive direction as reference (default: oxidative direction as reference) – values: a boolean:yes
,on
,true
orno
,off
,false
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/use-eoc=
yes-no: whether to use explicitly the bias or compute it using the open circuit potential (default: false) – values: a boolean:yes
,on
,true
orno
,off
,false
/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset version of the ecr-wave
fit.
sim-ecr-wave
– Simulation: Fit of an EC reversible catalytic wave
sim-ecr-wave
parameters datasets… /debug=
integer /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /model=
choice /operation=
choice /override=
overrides /reduction=
yes-no /reversed=
yes-no /set-meta=
meta-data /style=
style /use-eoc=
yes-no
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/reduction=
yes-no: if on, use the reductive direction as reference (default: oxidative direction as reference) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/use-eoc=
yes-no: whether to use explicitly the bias or compute it using the open circuit potential (default: false) – values: a boolean:yes
,on
,true
orno
,off
,false
Simulation command for the ecr-wave
fit.
fit-eeci-wave
– Fit: Fit of an EC irreversible catalytic wave
fit-eeci-wave
/arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /model=
choice /parameters=
file /reduction=
yes-no /script=
file /set-from-meta=
parameters-meta-data (see there) /window-title=
text (interactive)
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/parameters=
file: pre-loads parameters – values: name of a file/reduction=
yes-no: if on, models a reductive wave (default: off, hence oxidative wave) – values: a boolean:yes
,on
,true
orno
,off
,false
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits the wave shape of an irreversible 2-electron catalytic cycle to the current dataset.
For the oxidative direction:
For the reductive direction:
mfit-eeci-wave
– Multi fit: Fit of an EC irreversible catalytic wave
mfit-eeci-wave
datasets… /arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /model=
choice /parameters=
file /perp-meta=
text /reduction=
yes-no /script=
file /set-from-meta=
parameters-meta-data (see there) /weight-buffers=
yes-no /window-title=
text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/reduction=
yes-no: if on, models a reductive wave (default: off, hence oxidative wave) – values: a boolean:yes
,on
,true
orno
,off
,false
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset version of the eeci-wave fit.
sim-eeci-wave
– Simulation: Fit of an EC irreversible catalytic wave
sim-eeci-wave
parameters datasets… /debug=
integer /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /model=
choice /operation=
choice /override=
overrides /reduction=
yes-no /reversed=
yes-no /set-meta=
meta-data /style=
style
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/reduction=
yes-no: if on, models a reductive wave (default: off, hence oxidative wave) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
Simulation command for the eeci-wave
fit.
fit-eecr-wave
– Fit: Fit of an EEC reversible catalytic wave
fit-eecr-wave
/arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /model=
choice /parameters=
file /reduction=
yes-no /script=
file /set-from-meta=
parameters-meta-data (see there) /use-eoc=
yes-no /window-title=
text (interactive)
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/parameters=
file: pre-loads parameters – values: name of a file/reduction=
yes-no: if on, use the reductive direction as reference (default: oxidative direction as reference) – values: a boolean:yes
,on
,true
orno
,off
,false
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/use-eoc=
yes-no: whether to use explicitly the bias or compute it using the open circuit potential (default: false) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits the wave shape of a reversible 2-electron catalytic cycle to the current dataset.
For the oxidative direction:
For the reductive direction:
mfit-eecr-wave
– Multi fit: Fit of an EEC reversible catalytic wave
mfit-eecr-wave
datasets… /arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /model=
choice /parameters=
file /perp-meta=
text /reduction=
yes-no /script=
file /set-from-meta=
parameters-meta-data (see there) /use-eoc=
yes-no /weight-buffers=
yes-no /window-title=
text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/reduction=
yes-no: if on, use the reductive direction as reference (default: oxidative direction as reference) – values: a boolean:yes
,on
,true
orno
,off
,false
/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/use-eoc=
yes-no: whether to use explicitly the bias or compute it using the open circuit potential (default: false) – values: a boolean:yes
,on
,true
orno
,off
,false
/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset variant of fit-eecr-wave
.
sim-eecr-wave
– Simulation: Fit of an EEC reversible catalytic wave
sim-eecr-wave
parameters datasets… /debug=
integer /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /model=
choice /operation=
choice /override=
overrides /reduction=
yes-no /reversed=
yes-no /set-meta=
meta-data /style=
style /use-eoc=
yes-no
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/model=
choice: the kind of model used for the computation (default: dispersion) – values: one of:nernst
,slow-et
,bd0-inf
,disp-k0
/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/reduction=
yes-no: if on, use the reductive direction as reference (default: oxidative direction as reference) – values: a boolean:yes
,on
,true
orno
,off
,false
/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/use-eoc=
yes-no: whether to use explicitly the bias or compute it using the open circuit potential (default: false) – values: a boolean:yes
,on
,true
orno
,off
,false
Simulation command for the eecr-wave fit.
fit-eecr-relay-wave
– Fit: Fit of an EECR+relay catalytic wave
fit-eecr-relay-wave
/arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /parameters=
file /script=
file /set-from-meta=
parameters-meta-data (see there) /use-potentials=
yes-no /window-title=
text (interactive)
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/parameters=
file: pre-loads parameters – values: name of a file/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/use-potentials=
yes-no: if on, use the potentials of the active site electronic transitions rather than the equilibrium constants – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Fits the so-called “EEC with relay” model of Fourmond et al, JACS 2013 to the data.
mfit-eecr-relay-wave
– Multi fit: Fit of an EECR+relay catalytic wave
mfit-eecr-relay-wave
datasets… /arg1=
file /arg2=
file /arg3=
file /debug=
integer /engine=
engine /expert=
yes-no /extra-parameters=
text /parameters=
file /perp-meta=
text /script=
file /set-from-meta=
parameters-meta-data (see there) /use-potentials=
yes-no /weight-buffers=
yes-no /window-title=
text (interactive)
- datasets…: datasets that will be fitted to – values: comma-separated lists of datasets in the stack, see dataset lists
/arg1=
file: first argument of the script file – values: name of a file/arg2=
file: second argument of the script file – values: name of a file/arg3=
file: third argument of the script file – values: name of a file/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/expert=
yes-no: runs the fit in expert mode – values: a boolean:yes
,on
,true
orno
,off
,false
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/parameters=
file: pre-loads parameters – values: name of a file/perp-meta=
text: if specified, it is the name of a meta-data that holds the perpendicular coordinates – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/script=
file: runs a script file – values: name of a file/set-from-meta=
parameters-meta-data (see there): sets parameter values from meta-data – values: comma-separated list of parameter=
meta specifications/use-potentials=
yes-no: if on, use the potentials of the active site electronic transitions rather than the equilibrium constants – values: a boolean:yes
,on
,true
orno
,off
,false
/weight-buffers=
yes-no: whether or not to weight datasets (off by default) – values: a boolean:yes
,on
,true
orno
,off
,false
/window-title=
text: defines the title of the fit window – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Multi-dataset version of the eecr-relay-wave
fit.
sim-eecr-relay-wave
– Simulation: Fit of an EECR+relay catalytic wave
sim-eecr-relay-wave
parameters datasets… /debug=
integer /engine=
engine /extra-parameters=
text /flags=
flags /for-which=
code /operation=
choice /override=
overrides /reversed=
yes-no /set-meta=
meta-data /style=
style /use-potentials=
yes-no
- parameters: file to load parameters from – values: name of a file
- datasets…: the datasets whose X values will be used for simulations – values: comma-separated lists of datasets in the stack, see dataset lists
/debug=
integer: Debug level: 0 means no debug output, increasing values mean increasing details – values: an integer/engine=
engine: The startup fit engine – values: Fit engine, one of:gsl-simplex
,lmder
,lmniel
,lmsder
,multi
,odrpack
,pso
,qsoas
,simplex
/extra-parameters=
text: defines supplementary parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “/flags=
flags: Flags to set on the newly created datasets – values: a comma-separated list of flags/for-which=
code: Only act on datasets matching the code (see there). – values: a piece of Ruby code/operation=
choice: Whether to just compute the function, the full jacobian, reexport parameters with errors or just annotate datasets – values: one of:annotate
,compute
,jacobian
,push
,reexport
,residuals
,subfunctions
/override=
overrides: a comma-separated list of parameters to override – values: several parameter=value assignments, separated by , or ;/reversed=
yes-no: Push the datasets in reverse order – values: a boolean:yes
,on
,true
orno
,off
,false
/set-meta=
meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments/style=
style: Style for the displayed curves – values: one of:brown-green
,red-blue
,red-green
,red-to-blue
,red-yellow-green
/use-potentials=
yes-no: if on, use the potentials of the active site electronic transitions rather than the equilibrium constants – values: a boolean:yes
,on
,true
orno
,off
,false
Simulation command for the
eecr-relay-wave
fit.
Commands available in the fit command-line interface
All the commands here become available by using the /expert=true option to the fit- or mfit- commands.
Fit engine selection
The fit interface provides commands to select fit engines and tune their parameters.
qsoas-engine
– qsoas
qsoas-engine /end-threshold=number /lambda=number /relative-min=number /residuals-threshold=number /scale=number /scaling=yes-no /trial-steps=integer (fit command)
- /end-threshold=number: – values: a floating-point number
- /lambda=number: – values: a floating-point number
- /relative-min=number: – values: a floating-point number
- /residuals-threshold=number: – values: a floating-point number
- /scale=number: – values: a floating-point number
- /scaling=yes-no: – values: a boolean: yes, on, true or no, off, false
- /trial-steps=integer: – values: an integer
This command selects the qsoas
fit engine, QSoas’s own
implementation of the Levenberg-Marquardt algorithm.
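For instance, typing the following at the fit prompt switches to this engine; it can also be selected when starting the fit with the /engine=qsoas option of the fit- and mfit- commands:
QSoas.fit> qsoas-engine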
odrpack-engine
– odrpack
odrpack-engine
(fit command)
Selects the ODRPACK fit engine.
multi-engine
– multi
multi-engine /end-threshold=number /global-scaling-order=number /lambda=number /relative-min=number /residuals-threshold=number /scale=number /scaling=yes-no /trial-steps=integer (fit command)
- /end-threshold=number: – values: a floating-point number
- /global-scaling-order=number: – values: a floating-point number
- /lambda=number: – values: a floating-point number
- /relative-min=number: – values: a floating-point number
- /residuals-threshold=number: – values: a floating-point number
- /scale=number: – values: a floating-point number
- /scaling=yes-no: – values: a boolean: yes, on, true or no, off, false
- /trial-steps=integer: – values: an integer
This command selects the multi fit engine, a variant of the qsoas fit engine adapted for massive multi-dataset fits.
simplex-engine
– simplex
simplex-engine /alpha=number /beta=number /delta=number /end-threshold=number /gamma=number (fit command)
- /alpha=number: – values: a floating-point number
- /beta=number: – values: a floating-point number
- /delta=number: – values: a floating-point number
- /end-threshold=number: – values: a floating-point number
- /gamma=number: – values: a floating-point number
This command selects the Simplex fit engine.
gsl-simplex-engine
– gsl-simplex
gsl-simplex-engine
(fit command)
This command selects the fit engine based on the GSL version of the simplex, which may or may not be better than the Simplex depending on the function used for fitting.
pso-engine
– pso
pso-engine /delta=number /min-inertia=number /particles=integer /starting-inertia=number (fit command)
- /delta=number: – values: a floating-point number
- /min-inertia=number: – values: a floating-point number
- /particles=integer: – values: an integer
- /starting-inertia=number: – values: a floating-point number
This command selects the Particle Swarm Optimizer fit engine.
Fit parameters manipulation
Here are a series of commands to manipulate the value and state of parameters.
Parameter name specification
Several commands work with parameter names. In single-dataset fits, the
situation is simple, since the name
just designates the
corresponding parameter. For multi-dataset fits, one can also use
name[#0]
to only designate the parameter name for the first dataset
(the numbering starts at 0). The number of the dataset is given in the
box on the first line under the fit data, and as the column number in
the parameters spreadsheet.
fix
– Fix parameter
fix parameter /buffers=datasets /for-which=code (fit command)
- parameter: the parameters to fix/unfix – values: …
- /buffers=datasets: restrict to selected datasets – values: comma-separated lists of datasets in the stack, see dataset lists
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
Fix the parameters, given by their names:
QSoas.fit> fix a
QSoas.fit> fix b[#3]
This fixes parameter a everywhere and b only for dataset #3 (i.e. the fourth one).
It is possible to target only specific datasets using the /buffers= and /for-which= options, which follow the same selection rules as outside of the fit interface.
unfix
– Unfix parameter
unfix parameter /buffers=datasets /for-which=code (fit command)
- parameter: the parameters to fix/unfix – values: …
- /buffers=datasets: restrict to selected datasets – values: comma-separated lists of datasets in the stack, see dataset lists
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
Same as fix
but sets the parameter free.
set
– Set parameter
set parameter value /buffers=datasets /expression=yes-no /fix=yes-no /for-which=code /unfix=yes-no (fit command)
- parameter: the parameters of the fit – values: …
- value: the value – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- /buffers=datasets: restrict to selected datasets – values: comma-separated lists of datasets in the stack, see dataset lists
- /expression=yes-no: whether the value is evaluated as an expression – values: a boolean: yes, on, true or no, off, false
- /fix=yes-no: if true, also fixes the parameters – values: a boolean: yes, on, true or no, off, false
- /for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
- /unfix=yes-no: if true, also unfixes the parameters – values: a boolean: yes, on, true or no, off, false
Sets the value of the given parameter. With /expression=true, the value is interpreted as an expression that is evaluated immediately in the context of each fit dataset, as in eval or apply-formula. The /fix=true and /unfix=true options can be used to fix or free the parameter at the same time as setting its value.
Like in the case of fix
, it is possible to target only
specific datasets using the /buffers=
and
/for-which=
options.
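As a hedged illustration, reusing the parameters a and b from the fix example above (the numerical value and the expression are arbitrary):
QSoas.fit> set a 1e-2 /fix=true
QSoas.fit> set b[#3] 2*a /expression=true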
set-from-dataset
– Set parameter from dataset
set-from-dataset
parameter source… (fit command)
- parameter: the parameters of the fit – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- source…: the source for the data – values: comma-separated lists of datasets in the stack, see dataset lists
Sets the value of the named parameter to the values deduced from the given dataset, matching the values corresponding to the closest perpendicular coordinate associated with the fitted datasets.
local
– Local parameter
local
parameter (fit command)
- parameter: the parameters whose global/local status to change – values: …
Makes the given parameter local to the datasets.
global
– Global parameter
global
parameter (fit command)
- parameter: the parameters whose global/local status to change – values: …
Makes the given parameter global.
save
– Save
save file /mkpath=yes-no /overwrite=yes-no /rotate=integer (fit command)
- file: name of the file for saving the parameters – values: name of a file
- /mkpath=yes-no: If true, creates all necessary directories – values: a boolean: yes, on, true or no, off, false
- /overwrite=yes-no: If true, overwrite without prompting – values: a boolean: yes, on, true or no, off, false
- /rotate=integer: if not zero, performs a file rotation before writing – values: an integer
Saves the current parameters to the given file, as one would with Ctrl+S.
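For instance (reusing the fit.params file name that appears in the load examples below):
QSoas.fit> save fit.params /overwrite=true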
load
– Load
load file /mode=choice /only=words /rename=words (fit command)
- file: name of the file to load the parameters from – values: name of a file
- /mode=choice: – values: one of: buffer-name, closest-Z, normal
- /only=words: loads only the given parameters – values: several words, separated by ‘,’
- /rename=words: rename parameters before setting – values: a comma-separated list of old->new parameter rename specifications
Loads the parameters from the given file.
By default, the parameters are copied from the parameters file, in the order in which they appear, to the datasets currently fitted. Parameters that are not present in the parameters file are left unchanged. Extra parameters are just ignored.
Using /mode=buffer-name, the datasets of the parameters file will be matched based on the names of the datasets. Any dataset whose name is not in the parameters file is ignored. Warning: an exact match is required!
Using /mode=closest-Z, the parameters corresponding to the closest value of the perpendicular coordinate will be chosen from the parameters file. This guarantees that all datasets will receive parameter values (provided there are parameters in the parameters file that are relevant for the current fit). Of course, the parameters file must have been saved from a fit in which relevant perpendicular coordinates had been set, which is possible either using set-perp or using the /perp-meta option to the mfit- command, and the current fit also needs to have relevant perpendicular coordinates.
You can load only specifically named parameters by passing the
comma-separated list of their names to the /only=
option.
You can also rename the parameters upon loading using the /rename= option, which takes an old->new specification. For instance, the following loads the A_2 parameters from fit.params into the A_4 parameters of the current fit:
QSoas.fit> load /rename='A_2->A_4' fit.params
Beware that /only
applies before /rename
.
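For instance, a hedged example combining these options, assuming fit.params contains parameters named a and b and was saved with relevant perpendicular coordinates:
QSoas.fit> load /only=a,b /mode=closest-Z fit.params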
show-parameters
– Show parameters
show-parameters
(fit command)
Displays a dialog box with a graphical display of the fit parameters with their respective errors.
parameters-spreadsheet
– Parameters spreadsheet
parameters-spreadsheet
(fit command)
Spawns the “parameters spreadsheet” dialog box to easily survey/edit parameters for a large number of datasets.
export
– Export parameters
export (/file=)file /errors=yes-no /mkpath=yes-no /overwrite=yes-no (fit command)
- (/file=)file (default option): name of the file for saving the parameters – values: name of a file
- /errors=yes-no: whether the errors are exported too – values: a boolean: yes, on, true or no, off, false
- /mkpath=yes-no: If true, creates all necessary directories – values: a boolean: yes, on, true or no, off, false
- /overwrite=yes-no: If true, overwrite without prompting – values: a boolean: yes, on, true or no, off, false
Exports the parameters either to the file given to the /file= option, or to the output file if it is not specified.
This does the same thing as using the “Export” or “Export to output file” items of the “Parameters…” combo box.
The parameters are written line by line (a dataset is a single
line). The format looks like this, for a simple fit-arb
a*x+b
fit:
## Buffer a a_err b b_err xstart xend residuals rel_residuals overall_res overall_rel_res buffer_weight
The rows start with the dataset name, then come the values of the
parameters, one per column, along with columns for the errors if
/errors=true
was specified. Then come the lowest and highest values
of x, the residuals (including the global residuals for a multifit)
and finally the dataset weight.
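A hedged example, where fit-results.dat is an arbitrary file name:
QSoas.fit> export /file=fit-results.dat /errors=true /overwrite=true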
reset
– Reset
reset /source=choice (fit command)
- /source=choice: – values: one of: backup, initial
Resets all the parameters, either to the “backup” values (i.e. the values at the start of the last fit) or to the initial guess.
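For instance, to go back to the initial guess:
QSoas.fit> reset /source=initial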
Fit trajectories
When running a fit, QSoas keeps track of all the fit attempts since the opening of the fit dialog. A pair “starting parameters” -> “ending parameters” is called a “fit trajectory”. Here is a collection of functions to work on fit trajectories.
flag-trajectories
– Flag trajectories
flag-trajectories /flags=words (fit command)
- /flags=words: Flags to set on the new trajectories – values: several words, separated by ‘,’
All the subsequent fit trajectories are flagged with the flags given
as the /flags=
option, until the next call to
flag-trajectories
. Calling flag-trajectories
without the /flags
option clears the flags to add to the new
trajectories.
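A hedged example, where good-start is an arbitrary flag name; the second call stops flagging new trajectories:
QSoas.fit> flag-trajectories /flags=good-start
QSoas.fit> flag-trajectories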
trim-trajectories
– Trim trajectories
trim-trajectories threshold /at-most=integer (fit command)
- threshold: threshold for trimming – values: a floating-point number
- /at-most=integer: keep at most that many trajectories – values: an integer
Removes from the list of trajectories all the trajectories whose final residuals are more than threshold times greater than the best final residuals.
If /at-most
is specified, it will keep only that many best
trajectories (after trimming according to threshold).
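For instance, to keep only the trajectories whose final residuals are within a factor of 2 of the best one, and at most 10 of them (both numbers are arbitrary):
QSoas.fit> trim-trajectories 2 /at-most=10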
save-trajectories
– Save trajectories
save-trajectories file /flag=choice /mode=choice (fit command)
- file: name of the file for saving the trajectories – values: name of a file
- /flag=choice: – values: one of: «
- /mode=choice: – values: one of: fail, overwrite, update
Saves the trajectories into a file.
The file is a TAB-separated file, which contains each trajectory on a single line. It contains, among other things:
* the starting parameters (whose names finish with _i);
* the final parameters (_f) and the corresponding errors (_err);
* the “point residuals”, buffer by buffer (point_residuals[...]), or the overall “point residuals” (residuals). They are the square root of the weighted average of the square of the difference between the fit and the data, so they represent an “average distance” between the fit and the data;
* the “relative residuals” (relative_res), which are the square root of the weighted average of the square of the difference between the fit and the data divided by the weighted average of the squares of the data, so that they represent an “average” relative deviation;
* the weights of the buffers (buffer_weight[...]);
* the engine used…
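A hedged example, where trajectories.dat is an arbitrary file name:
QSoas.fit> save-trajectories trajectories.dat /mode=update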
load-trajectories
– Load trajectories
load-trajectories file /mode=choice (fit command)
- file: name of the file for saving the trajectories – values: name of a file
- /mode=choice: – values: one of: drop, ignore, update
Loads the trajectories from a previously saved fit trajectory file
(see save-trajectories
).
browse-trajectories
– Browse trajectories
browse-trajectories
(fit command)
Shows a dialog box with a spreadsheet to browse all the trajectories with initial and final parameters.
list-trajectories
– List trajectories
list-trajectories /flag=choice (fit command)
- /flag=choice: – values: one of: «
Shows a list of all the trajectories in the terminal.
sort-trajectories
– Sort trajectories
sort-trajectories /by=choice /reverse=yes-no (fit command)
- /by=choice: The rules to sort – values: one of: date, residuals
- /reverse=yes-no: Reverses the sort order – values: a boolean: yes, on, true or no, off, false
Sorts the current list of trajectories, either by date or by residuals, depending on the choice given to the /by option.
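For instance:
QSoas.fit> sort-trajectories /by=residuals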
drop-trajectories
– Drop trajectories
drop-trajectories
trajectories (fit command)
- trajectories: trajectories to remove – values: fit Trajectories
Deletes the trajectories whose flags are given as argument.
run-for-trajectories
– Run commands
run-for-trajectories file trajectories /add-to-history=yes-no /cd-to-script=yes-no /error=choice /parameters=choice /silent=yes-no /sort=choice (fit command)
- file: the script to run – values: name of a file
- trajectories: trajectories to run – values: fit Trajectories
- /add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
- /cd-to-script=yes-no: If on, automatically change the directory to that of the script – values: a boolean: yes, on, true or no, off, false
- /error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
- /parameters=choice: which parameters to use – values: one of: final, initial
- /silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false
- /sort=choice: whether to sort the trajectories (by residuals or by date) – values: one of: date, residuals
Loops through the specified trajectories, restores their final (or initial) parameters, depending on the /parameters option, and runs the given script. See run for more information about the options.
If /sort is given, the trajectories will first be sorted according to the given criterion (date or residuals) before running the script.
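A hedged sketch, where refit.cmds is an arbitrary script name and good-start an arbitrary trajectory flag (as set with flag-trajectories):
QSoas.fit> run-for-trajectories refit.cmds good-start /parameters=final /sort=residuals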
Miscellaneous commands
quit
– Quit
quit
(fit command)
Quits the fit window.
mem
– Memory
mem
(fit command)
Like the other mem
command, gives some information about the
memory and other resources usage of QSoas.
select
– Select
select
dataset (fit command)
- dataset: the number of the dataset in the fit (not in the stack) – values: an integer
Views the numbered dataset in the fit window. The number corresponds to the number inside the fit dialog box, not the number in QSoas’s stack.
eval
– Evaluate
eval
expression (fit command)
- expression: the expression to evaluate – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Evaluates a Ruby expression. The meta-data of the current
dataset are available through the $meta
variable, and the parameters
of the current datasets are available through their usual name
(including those with special characters and those starting with an
uppercase letter).
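A minimal hedged example, assuming the current fit has a parameter named a:
QSoas.fit> eval 2*a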
ruby-run
– Ruby load
ruby-run
file (fit command)
- file: Ruby file to load – values: name of a file
Like the other ruby-run command, loads and runs a Ruby code file.
save-history
– Save history
save-history file /overwrite=yes-no (fit command)
- file: Output file – values: name of a file
- /overwrite=yes-no: If true, overwrite without prompting – values: a boolean: yes, on, true or no, off, false
Like the other save-history
, saves all the commands typed into
the fit window to the given file.
run
– Run commands
run file… /add-to-history=yes-no /cd-to-script=yes-no /error=choice /only-if=code /silent=yes-no (fit command)
Other name: @
- file…: First is the command file, following are arguments – values: one or more files. Can include wildcards such as *, [0-4], etc…
- /add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
- /cd-to-script=yes-no: If on, automatically change the directory to that of the script – values: a boolean: yes, on, true or no, off, false
- /error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
- /only-if=code: If specified, the script is only run when the condition is true – values: a piece of Ruby code
- /silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false
Like the other run
command, runs the given script. The options
and arguments are interpreted the same way as the other run
command.
run-for-each
– Runs a script for several arguments
run-for-each script arguments… /add-to-history=yes-no /arg2=file /arg3=file /arg4=file /arg5=file /arg6=file /error=choice /range-type=choice /silent=yes-no (fit command)
- script: The script file – values: name of a file
- arguments…: All the arguments for the script file to loop on – values: one or more files. Can include wildcards such as *, [0-4], etc…
- /add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
- /arg2=file: Second argument to the script – values: name of a file
- /arg3=file: Third argument to the script – values: name of a file
- /arg4=file: Fourth argument to the script – values: name of a file
- /arg5=file: Fifth argument to the script – values: name of a file
- /arg6=file: Sixth argument to the script – values: name of a file
- /error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
- /range-type=choice: If on, transform arguments into ranged numbers – values: one of: lin, log
- /silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false
Like the other run-for-each
, runs a script for several values
of its first parameter.
verify – Verify
verify expression (fit command)
- expression: the expression to evaluate – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
Does the same as the general verify command.
fit – Fit
fit /iterations=integer /trace-file=file (fit command)
/iterations=integer: the maximum number of iterations of the fitting process – values: an integer
/trace-file=file: a file to save the details of the fitting process – values: name of a file
Runs the fit, optionally changing the maximum number of fit iterations through the /iterations option.
linear-prefit – Linear prefit
linear-prefit /just-look=yes-no /threshold=number (fit command)
/just-look=yes-no: if true, just find the linear parameters, do not adjust – values: a boolean: yes, on, true or no, off, false
/threshold=number: threshold under which to consider linearity – values: a floating-point number
This command determines which parameters are linear in the current fit, and runs a linear least-squares minimization procedure on them. This can greatly help with convergence in some cases, or simply greatly speed it up.
With /just-look=true, this command does not modify the fit parameters, but just displays in the terminal which parameters were found to be linear.
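For instance, one might first check which parameters QSoas considers linear, then actually run the linear pre-fit before launching the full fit:
QSoas.fit> linear-prefit /just-look=true
QSoas.fit> linear-prefit
QSoas.fit> fit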
commands – Commands
commands (fit command)
Like the other commands command, lists the commands available from within the fit prompt.
system – System
system command… /shell=yes-no /timeout=integer (fit command)
- command…: Arguments of the command – values: one or more files. Can include wildcards such as *, [0-4], etc…
/shell=yes-no: use shell (on by default on Linux/Mac, off on Windows) – values: a boolean: yes, on, true or no, off, false
/timeout=integer: timeout (in milliseconds) – values: an integer
Like the other system command, runs an external program.
push – Push to stack
push /flags=flags /recompute=yes-no /residuals=yes-no /reversed=yes-no /set-meta=meta-data /style=style /subfunctions=yes-no (fit command)
/flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
/recompute=yes-no: whether or not to recompute the fit (on by default) – values: a boolean: yes, on, true or no, off, false
/residuals=yes-no: if true, push the residuals rather than the computed values – values: a boolean: yes, on, true or no, off, false
/reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
/set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
/style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
/subfunctions=yes-no: whether the subfunctions are also exported or not – values: a boolean: yes, on, true or no, off, false
Pushes the computed function to the stack, like the corresponding sim- fit command would do.
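As a sketch (the flag name is arbitrary), the following pushes the residuals rather than the computed values, and flags the new datasets so that they are easy to retrieve later:
QSoas.fit> push /residuals=true /flags=fit-residuals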
Parameter space exploration
QSoas now provides facilities for parameter space exploration. The idea is that QSoas will attempt several (many!) fits with different starting parameters. There are different explorers, which choose new starting parameters in different ways, but all explorers can be used this way:
QSoas.fit> monte-carlo-explorer A_inf:-10..10
Selected parameter space explorer: 'monte-carlo'
Setting up monte-carlo explorator with: 20 iterations and 50 fit iterations
* A_inf[#0]: -10 to 10 lin
QSoas.fit> iterate-explorer
The first command sets up the explorer, here the monte-carlo-explorer, and the second iterates the explorer, choosing new parameters and running the fits, until the number of iterations specified by the explorer has been reached.
monte-carlo-explorer – Monte Carlo
monte-carlo-explorer parameters… /fit-iterations=integer /gradual-datasets=integer /iterations=integer /reset-frequency=integer (fit command)
- parameters…: Parameter specification – values: several words, separated by spaces
/fit-iterations=integer: Maximum number of fit iterations – values: an integer
/gradual-datasets=integer: Number of starting datasets when doing gradual exploration – values: an integer
/iterations=integer: Number of Monte Carlo iterations – values: an integer
/reset-frequency=integer: If > 0, reset to the best parameters every that many iterations – values: an integer
Sets up a “Monte Carlo” exploration, i.e. an exploration in which the initial parameters are chosen uniformly within given segments.
QSoas.fit> monte-carlo-explorer A_inf:-10..10 tau_1:1e-2..1e2,log
This command sets up the exploration, with the parameter A_inf uniformly distributed between -10 and 10, and tau_1 with a log-uniform distribution between 1e-2 and 1e2. The other parameters are left untouched from the previous fit iteration.
If /reset-frequency= is used to specify a number above 0, all the other parameters of the fit (the ones that are not listed on the command-line) will be reset to the values they had at the end of the current best fit every that many explorer iterations.
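As a sketch (the numbers are arbitrary), the following asks for 200 Monte Carlo iterations with at most 100 fit iterations each, resetting the unlisted parameters to the current best values every 20 iterations:
QSoas.fit> monte-carlo-explorer A_inf:-10..10 /iterations=200 /fit-iterations=100 /reset-frequency=20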
linear-explorer – Linear ramp
linear-explorer parameters… /fit-iterations=integer /iterations=integer (fit command)
- parameters…: Parameter specification – values: several words, separated by spaces
/fit-iterations=integer: Maximum number of fit iterations – values: an integer
/iterations=integer: Number of explorer iterations – values: an integer
Varies the parameters linearly (or logarithmically) over the given range:
QSoas.fit> linear-explorer A_inf:-10..10
This command runs a number of fits with the initial value of A_inf ranging from -10 to +10. You can specify several parameters this way; they will be varied simultaneously (i.e. they will be linearly correlated). Adding ,log switches to an exponential progression.
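As a sketch (the range is arbitrary), the following ramps tau_1 logarithmically over four decades, using 30 explorer iterations:
QSoas.fit> linear-explorer tau_1:1e-2..1e2,log /iterations=30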
iterate-explorer – Iterate explorer
iterate-explorer (/script=)file /arg1=file /arg2=file /improved-script=file /just-pick=yes-no /linear-prefit=yes-no /pre-script=file (fit command)
- (/script=)file (default option): script file run after the iteration – values: name of a file
/arg1=file: First argument to the scripts – values: name of a file
/arg2=file: Second argument to the scripts – values: name of a file
/improved-script=file: script file run whenever the best residuals have improved – values: name of a file
/just-pick=yes-no: If true, then just picks the next initial parameters, does not fit, does not iterate – values: a boolean: yes, on, true or no, off, false
/linear-prefit=yes-no: If true, runs a linear pre-fit before running the real fit – values: a boolean: yes, on, true or no, off, false
/pre-script=file: script file run after choosing the parameters and before running the fit – values: name of a file
Runs all the iterations of the previously set up explorer. If /just-pick=true is specified, it just picks the parameters once, and runs neither the iterations nor any fit.
The /pre-script, /script and /improved-script options specify the names of script files that will be run, respectively, after picking the parameters but before running the fit, after the fit, or every time the best residuals are improved. They can be given additional arguments through the /arg1 and /arg2 options.
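As a sketch, assuming save-best.cmds is a script of your own (for instance one that exports the current parameters), the following runs it with one extra argument every time the best residuals improve:
QSoas.fit> iterate-explorer /improved-script=save-best.cmds /arg1=best-so-far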
Computation/simulations functions
The commands in this section generate data “from scratch”, though most require a dataset as a starting point to provide X values. You can create a dataset for those commands using generate-dataset.
Evaluation functions
QSoas provides various functions to evaluate the result of mathematical operations.
eval – Ruby eval
eval codes… (/buffers=)datasets /accumulate=meta-data /for-which=code /meta=meta-data /modify-meta=yes-no /output=yes-no /set-meta=meta-data /use-dataset=yes-no
Other name: eval-cmd
- codes…: Any Ruby code – values: several pieces of Ruby code
- (/buffers=)datasets (default option): Datasets to run eval on – values: comma-separated lists of datasets in the stack, see dataset lists
/accumulate=meta-data: accumulate the given data into a dataset – values: comma-separated list of names of meta-data to accumulate, see here
/for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
/meta=meta-data: when writing to the output file, also prints the listed meta-data – values: comma-separated list of names of meta-data
/modify-meta=yes-no: Reads back the modifications made to the $meta hash (implies /use-dataset=true) – values: a boolean: yes, on, true or no, off, false
/output=yes-no: whether to write data to the output file (defaults to false) – values: a boolean: yes, on, true or no, off, false
/set-meta=meta-data: saves the results of the command as meta-data rather than/in addition to saving to the output file – values: comma-separated list of names of meta-data, or a->b specifications, see here
/use-dataset=yes-no: If on (the default) and if there is a current dataset, the $meta and $stats hashes are available – values: a boolean: yes, on, true or no, off, false
Evaluates the given code as a Ruby expression:
QSoas> eval 2*3
=> 6
It runs in the same environment as apply-formula and the custom fits (except, of course, that there are no x and y variables). It can be useful to check that a function has been correctly defined in a file loaded by ruby-run.
Moreover, if /use-dataset is true (the default), it can also access the meta-data and statistics of the current dataset (as apply-formula does with /use-meta=true and /use-stats=true):
QSoas> generate-dataset 0 10 x**3
QSoas> eval $stats.y_int
=> 2500.002505007509
You can also use this command as a calculator.
Starting from version 3.1, eval can be used much more effectively for data extraction from a number of datasets. It can work on several datasets in a row using the classical /buffers and /for-which options, and can use several formulas. For instance:
QSoas> eval $stats.x_max $stats.y_int /buffers=flagged:my-data /output=true
will write to the output file the maximum x value and the corresponding integration over Y of all the datasets flagged my-data. To ease the parsing afterwards, the values can be given a name, which will be used as a column name for the output file (and the accumulator if you chose this):
QSoas> eval xmax:$stats.x_max my_int:$stats.y_int /buffers=flagged:my-data /output=true
This is now the recommended way to extract all kinds of information from datasets.
/modify-meta=true
With the option /modify-meta=true, it is possible to modify the meta-data of the dataset by changing the values of the $meta dictionary. It is also possible to add new values. For instance, the following command:
QSoas> eval /modify-meta=true $meta.yyy=3
is equivalent to using set-meta this way:
QSoas> set-meta yyy 3
This option also makes it possible to modify the row and column names by modifying the $row_names and $col_names variables:
QSoas> eval /modify-meta=true "$col_names[1]='my_y'"
This sets the name of the first Y column to my_y. Watch out: this will only work for setting row names if some row names already exist in the given datasets.
verify – Verify
verify formula (/buffers=)datasets /for-which=code /use-dataset=yes-no
- formula: formula – values: a piece of Ruby code
- (/buffers=)datasets (default option): Datasets to run verify on – values: comma-separated lists of datasets in the stack, see dataset lists
/for-which=code: Only act on datasets matching the code (see there). – values: a piece of Ruby code
/use-dataset=yes-no: If on (the default) and if there is a current dataset, the $meta and $stats hashes are available – values: a boolean: yes, on, true or no, off, false
Evaluates the given Ruby code. If its value is false, the command fails.
This function only makes sense in scripts, to abort a script before running long computations if one detects that something went wrong. If the data you load really should only have positive X values, then you can ensure that this way:
# X values are positive
verify $stats.x_min>0
find-root – Finds a root
find-root formula seed (/max=)number
- formula: An expression of 1 variable (not an equation !) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- seed: Initial X value from which to search – values: a floating-point number
- (/max=)number (default option): If present, uses dichotomy between seed and max – values: a floating-point number
Finds the root of the given x-dependent expression using an iterative algorithm, using seed as the initial value. If the /max option is specified, then the search proceeds by dichotomy between the two values (seed and max).
QSoas> find-root 'x**2 - 3' 1
Found root at: 1.73205
Do not use an equal sign: the formula is an expression, not an equation. The returned value is the one for which the expression evaluates to 0.
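If you know an interval over which the expression changes sign, the dichotomy variant is often more robust; a sketch (the expression is arbitrary):
QSoas> find-root 'exp(-x) - 0.5*x' 0 /max=3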
integrate-formula – Integrate expression
integrate-formula formula a b /integrator=choice /prec-absolute=number /prec-relative=number /subdivisions=integer
- formula: An expression of 1 variable (not an equation !) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- a: Left bound of the segment – values: a floating-point number
- b: Right bound of the segment – values: a floating-point number
/integrator=choice: The algorithm used for integration – values: one of: gauss15, gauss21, gauss31, gauss41, gauss51, gauss61, qng
/prec-absolute=number: Absolute precision required for integration – values: a floating-point number
/prec-relative=number: Relative precision required for integration – values: a floating-point number
/subdivisions=integer: Maximum number of subdivisions in the integration algorithm – values: an integer
Computes the integral of the given expression of x between the bounds a and b:
QSoas> integrate-formula x**2 10 22
Integral value: 3216 estimated error: 3.57048e-11 in 31 evaluations over 1 intervals
The available integrators are the gauss integrators (with a suffix ranging from 15 to 61), which correspond to adaptive Gauss-Kronrod integrators (starting with that many evaluations), and qng, which is a non-adaptive Gauss-Kronrod integrator. See the documentation of the GNU Scientific Library for more information.
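As a sketch (the expression and precision are arbitrary), one can select a specific integrator and tighten the relative precision like this:
QSoas> integrate-formula exp(-x**2) 0 5 /integrator=gauss31 /prec-relative=1e-10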
mintegrate-formula – Integrate expression
mintegrate-formula formula a b /integrator=choice /max-evaluations=integer /prec-absolute=number /prec-relative=number
- formula: An expression of x and z – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- a: Lower Z value – values: a floating-point number
- b: Upper Z value – values: a floating-point number
/integrator=choice: The algorithm used for integration – values: one of: akima, csplines, gk15, gk21, gk31, gk41, gk51, gk61, naive
/max-evaluations=integer: Maximum number of function evaluations – values: an integer
/prec-absolute=number: Absolute precision required for integration – values: a floating-point number
/prec-relative=number: Relative precision required for integration – values: a floating-point number
This command takes a function f(x,z) of x and z, and two numbers a and b, and computes, for each value of x of the current dataset, the integral $\int_a^b f(x,z)\,\mathrm{d}z$.
This command uses the same algorithms for integration as the fits created by define-distribution-fit.
generate-dataset – Generate dataset
generate-dataset start end (/formula=)words /columns=integer /flags=flags /log=yes-no /name=text /number=integer /reversed=yes-no /samples=integer /set-meta=meta-data /style=style
Other name: generate-buffer
- start: The first X value – values: a floating-point number
- end: The last X value – values: a floating-point number
- (/formula=)words (default option): Formula to generate the Y values – values: several words, separated by spaces
/columns=integer: number of columns of the generated datasets – values: an integer
/flags=flags: Flags to set on the newly created datasets – values: a comma-separated list of flags
/log=yes-no: uses logarithmically spaced X values – values: a boolean: yes, on, true or no, off, false
/name=text: The name of the newly generated buffers (may include a %d specification for the number) – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
/number=integer: generates that many datasets – values: an integer
/reversed=yes-no: Push the datasets in reverse order – values: a boolean: yes, on, true or no, off, false
/samples=integer: number of data points – values: an integer
/set-meta=meta-data: Meta-data to add to the newly created datasets – values: one or more meta=value assignments
/style=style: Style for the displayed curves – values: one of: brown-green, red-blue, red-green, red-to-blue, red-yellow-green
Generates a dataset with samples data points (1000 by default), uniformly spaced between start and end.
If formula is provided, it sets the Y values according to this formula (otherwise Y is taken equal to X).
QSoas> generate-dataset -10 10 sin(x)
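As a further sketch (the formula and the name are arbitrary), options can control the sampling and the name of the generated dataset:
QSoas> generate-dataset 0 10 sin(x)*exp(-0.3*x) /samples=200 /name=damped-sine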
Simulation functions
kinetic-system – Kinetic system evolver
kinetic-system reaction-file parameters /adaptive=yes-no /annotate=yes-no /dump=yes-no /min-step-size=number /prec-absolute=number /prec-relative=number /step-size=number /stepper=stepper /sub-steps=integer
- reaction-file: File describing the kinetic system – values: name of a file
- parameters: Parameters of the model – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
/adaptive=yes-no: whether or not to use an adaptive stepper (on by default) – values: a boolean: yes, on, true or no, off, false
/annotate=yes-no: If on, a last column will contain the number of function evaluations for each step (default false) – values: a boolean: yes, on, true or no, off, false
/dump=yes-no: if on, prints a description of the system rather than solving it (default: false) – values: a boolean: yes, on, true or no, off, false
/min-step-size=number: minimum step size for the stepper – values: a floating-point number
/prec-absolute=number: absolute precision required – values: a floating-point number
/prec-relative=number: relative precision required – values: a floating-point number
/step-size=number: initial step size for the stepper – values: a floating-point number
/stepper=stepper: algorithm used for integration (default: rkf45) – values: ODE stepper algorithm, one of: bsimp, msadams, msbdf, rk1imp, rk2, rk2imp, rk4, rk4imp, rk8pd, rkck, rkf45
/sub-steps=integer: If this is not 0, then the smallest step size is that many times smaller than the minimum delta t – values: an integer
Simulates the evolution over time of the kinetic system given in the reaction-file (see the section kinetic system for the syntax of the reaction files).
This command uses the current dataset as a source for X values.
The result is a multi-column dataset containing the concentrations of all the species in the different columns.
parameters is a list of assignments evaluated at the beginning of the time evolution to set the parameters of the system (all parameters not set this way default to 0). This list is evaluated as Ruby code, so you should separate the assignments with ;.
For instance, if the reaction file (system.sys) contains:
A <=>[ki][ka] I
You can run the following commands to simulate the time evolution of the system with an initial concentration of A equal to 1 (the parameter c0_A), of I equal to 0 (the parameter c0_I, here not specified and thus assumed to be 0) and with ki and ka equal to 1:
QSoas> generate-dataset 0 10
QSoas> kinetic-system system.sys 'c0_A = 1;ka = 1; ki = 1'
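To check how QSoas parsed the reaction file before actually simulating, you can rely on the /dump option (reusing the example file above):
QSoas> kinetic-system system.sys 'c0_A = 1;ka = 1; ki = 1' /dump=true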
ode – ODE solver
ode file (/parameters=)text /adaptive=yes-no /annotate=yes-no /dump=yes-no /min-step-size=number /prec-absolute=number /prec-relative=number /step-size=number /stepper=stepper /sub-steps=integer
- file: File containing the system – values: name of a file
- (/parameters=)text (default option): Values of the parameters – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
/adaptive=yes-no: whether or not to use an adaptive stepper (on by default) – values: a boolean: yes, on, true or no, off, false
/annotate=yes-no: If on, a last column will contain the number of function evaluations for each step – values: a boolean: yes, on, true or no, off, false
/dump=yes-no: If on, do not integrate, just dump the parsed contents of the ODE file – values: a boolean: yes, on, true or no, off, false
/min-step-size=number: minimum step size for the stepper – values: a floating-point number
/prec-absolute=number: absolute precision required – values: a floating-point number
/prec-relative=number: relative precision required – values: a floating-point number
/step-size=number: initial step size for the stepper – values: a floating-point number
/stepper=stepper: algorithm used for integration (default: rkf45) – values: ODE stepper algorithm, one of: bsimp, msadams, msbdf, rk1imp, rk2, rk2imp, rk4, rk4imp, rk8pd, rkck, rkf45
/sub-steps=integer: If this is not 0, then the smallest step size is that many times smaller than the minimum delta t – values: an integer
ode solves ordinary differential equations. The equation definition file is structured in three parts, separated by at least one fully blank line, the last part being optional.
The first section defines the “initial conditions”; there are as many integrated variables as there are lines in this section. This section is only evaluated once, at the beginning of the integration.
The second section defines the derivatives; they are evaluated several times for each time step.
The third is optional and is described further below.
Here is the contents of the file (say sine.ode) one would use to obtain sin(x) and cos(x) as solutions:
sin = 0
cos = 1
d_sin = cos
d_cos = -sin
Important Make sure that at least one fully blank line separates the definition of the initial values and the definition of the derivatives. Make sure also that to each variable defined in the first section corresponds a derivative in the second, whose name starts with d_.
After running the commands:
QSoas> generate-dataset 0 10
QSoas> ode sine.ode
one has a dataset with one X column (representing the x values) and two Y columns, sin(x) and cos(x) (in the order in which they are given in the “initial conditions” section).
The optional third section can be used to control the exact output of the program. The above example can be completed thus:
sin = 0
cos = 1
d_sin = cos
d_cos = -sin
[sin, cos, sin**2 + cos**2]
Using this gives 3 Y columns: sin(x), cos(x) and sin(x)**2 + cos(x)**2. The latter should hopefully be very close to 1.
Details of the integration procedure can be tweaked using the following parameters:
- /stepper: the ODE stepper algorithm. You can find more about them in the GSL documentation. rkf45 is the standard Runge-Kutta-Fehlberg integrator, and is the default choice. If QSoas complains that it has difficulties to integrate and that you should try implicit solvers (because your system is too stiff), then try rk4imp, bsimp, msadams or msbdf.
- /prec-relative and /prec-absolute control the precision. A step will be deemed precise enough if the error estimate is smaller than either the relative precision or the absolute precision.
- /adaptive controls whether an adaptive step size is used (the values of x in the resulting dataset are always those asked for, but there may be more intermediate steps). You should seldom need to turn it off.
If /annotate is on, a last column is added that contains the number of derivative evaluations for each step (useful for understanding why an integration takes so long, for instance).
The system of equations may contain undefined variables; one could for instance have used:
d_sin = omega * cos
d_cos = -omega * sin
Their values are set to 0 by default. You can change their values using the /parameters option:
QSoas> ode sine.ode /parameters="omega = 3"
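As a sketch of how the integration can be tuned, the following re-runs the example above with a tighter relative precision and an extra column recording the number of derivative evaluations:
QSoas> ode sine.ode /prec-relative=1e-8 /annotate=true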
Scripting facilities
QSoas provides facilities for scripting, i.e. running commands unattended, for instance to prepare series of data files for fitting or further use. The following commands are useful only in this context.
Scripting commands
run – Run commands
run file… /add-to-history=yes-no /cd-to-script=yes-no /error=choice /only-if=code /silent=yes-no
Other name: @
- file…: First is the command file, following are arguments – values: one or more files. Can include wildcards such as *, [0-4], etc…
/add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
/cd-to-script=yes-no: If on, automatically change the directory to that of the script – values: a boolean: yes, on, true or no, off, false
/error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
/only-if=code: If specified, the script is only run when the condition is true – values: a piece of Ruby code
/silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false
Run commands saved in a file. If a compulsory argument is missing, QSoas will prompt the user.
Arguments following the name of the script are passed to the script as “special variables” ${1}, ${2}, etc.
Imagine you are often applying the same processing to a given type of data files, say, simply filtering them. You just have to write a script process.cmd containing:
load ${1}
auto-filter-fft
And run it this way:
QSoas> run process.cmd data_file.dat
or
QSoas> @ process.cmd data_file.dat
If you use run regularly, you may be interested in the other scripting commands, such as run-for-each, run-for-datasets and startup-files.
If the /only-if= condition option is specified, the script will only be executed if the condition is true. The condition has the same behaviour as that for the verify command.
Advanced use of script parameters
If you want to manipulate the arguments or provide default values for some of them, you can use the following syntax (a worked example follows the list):
- ${2%%suffix} will be replaced by parameter 2 with the suffix “suffix” removed, or simply by parameter 2 if it does not end with “suffix”.
- ${2##prefix} will be replaced by parameter 2 with the prefix “prefix” removed, or simply by parameter 2 if it does not start with “prefix”.
- ${2:-value}: this will be replaced by parameter 2 if it has been provided to the script, or by “value” if it has not been provided.
- ${2:+value}: this will be replaced by “value” if parameter 2 has been provided to the script, or by nothing if that is not the case (the value of parameter 2 is not used).
- ${2?yes:no}: this will be replaced by “yes” if parameter 2 has been provided to the script, or by “no” if that is not the case.
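As a sketch building on the process.cmd example above (and assuming the data files end in .dat), the following script filters its first argument and saves the result under a derived name:
load ${1}
auto-filter-fft
save ${1%%.dat}-filtered.dat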
Error handling
It is possible to change how the script handles errors using the /error option, which can take the following values:
- abort (the default behaviour): when a command in the script fails, the script stops executing, and control comes back either to the command-line or to the calling script. In the latter case, this behaviour is not considered as an error (i.e. the calling script does not abort);
- ignore: if a command in the script fails, the script keeps on running;
- except: as in abort, but this is considered as an error, so this may also stop the calling script;
- delete: as in abort, but all the datasets generated during the execution of this script are removed from the stack.
let – Define a named parameter
let name value
- name: the name of the parameter – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
- value: the value of the parameter – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
let makes it possible to define “named parameters” that can be reused inside scripts. For instance:
QSoas> let max 100
QSoas> generate-dataset 0 ${max}
They can also be used in more elaborate ways, like the normal script parameters; see there.
Warning: parameter expansion only works inside scripts. Typing the above commands directly at the command prompt will yield an error.
startup-files – Startup files
startup-files (/add=)file /rm=integer /run=yes-no
- (/add=)file (default option): adds the given startup file – values: name of a file
/rm=integer: removes the numbered file – values: an integer
/run=yes-no: if on, runs all the startup files right now (off by default) – values: a boolean: yes, on, true or no, off, false
This command instructs QSoas to execute command files at startup. Without options, it displays the list of command files that QSoas will read at the next startup.
Files given to the /add option are added at the end of the list.
To remove a file from the list, obtain its number by running startup-files without any option, then use startup-files again with the /rm= option.
You can re-run all startup files by running:
QSoas> startup-files /run=true
run-for-each – Runs a script for several arguments
run-for-each script arguments… /add-to-history=yes-no /arg2=file /arg3=file /arg4=file /arg5=file /arg6=file /error=choice /range-type=choice /silent=yes-no
- script: The script file – values: name of a file
- arguments…: All the arguments for the script file to loop on – values: one or more files. Can include wildcards such as *, [0-4], etc…
/add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
/arg2=file: Second argument to the script – values: name of a file
/arg3=file: Third argument to the script – values: name of a file
/arg4=file: Fourth argument to the script – values: name of a file
/arg5=file: Fifth argument to the script – values: name of a file
/arg6=file: Sixth argument to the script – values: name of a file
/error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
/range-type=choice: If on, transform arguments into ranged numbers – values: one of: lin, log
/silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false
Runs the given script file successively for each argument given. For instance, running:
QSoas> run-for-each process-my-file.cmds file1 file2 file3
Is equivalent to running successively
QSoas> @ process-my-file.cmds file1
QSoas> @ process-my-file.cmds file2
QSoas> @ process-my-file.cmds file3
The arguments need not be file names, although automatic completion will only propose file names. If the script you want to run requires more than one argument, you can specify the extra arguments (used for all the runs) using the options /arg2, /arg3 and so on:
QSoas> run-for-each process-my-file.cmds /arg2=other file1 file2
Is equivalent to running:
QSoas> @ process-my-file.cmds file1 other
QSoas> @ process-my-file.cmds file2 other
If you specify either /range-type=lin or /range-type=log, the parameters are interpreted differently, and are expected to be of the type 1..10:20, which means 20 numbers between 1 and 10 (inclusive), that are spaced either linearly or logarithmically, depending on the value of the option.
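As a sketch, assuming simulate.cmds is a script of your own that takes a number as its first argument, the following runs it for 20 logarithmically spaced values between 1e-2 and 1e2:
QSoas> run-for-each simulate.cmds /range-type=log 1e-2..1e2:20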
The /error= option controls how the scripts handle errors. See run for more information.
run-for-datasets – Runs a script for several datasets
run-for-datasets script datasets… /add-to-history=yes-no /arg1=file /arg2=file /arg3=file /arg4=file /arg5=file /arg6=file /error=choice /silent=yes-no
- script: The script file – values: name of a file
- datasets…: All the arguments for the script file to loop on – values: comma-separated lists of datasets in the stack, see dataset lists
/add-to-history=yes-no: whether the commands run are added to the history (defaults to false) – values: a boolean: yes, on, true or no, off, false
/arg1=file: First argument to the script – values: name of a file
/arg2=file: Second argument to the script – values: name of a file
/arg3=file: Third argument to the script – values: name of a file
/arg4=file: Fourth argument to the script – values: name of a file
/arg5=file: Fifth argument to the script – values: name of a file
/arg6=file: Sixth argument to the script – values: name of a file
/error=choice: Behaviour to adopt on error – values: one of: abort, delete, except, ignore
/silent=yes-no: whether or not to switch off display updates during the script (off by default) – values: a boolean: yes, on, true or no, off, false
Runs the given script file for each of the datasets given. Before each invocation of the script, the dataset is pushed back to the top of the stack, as if by fetch.
The /error= option controls how the scripts handle errors. See run for more information.
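As a sketch, assuming process-one.cmds is a script of your own, the following runs it once for every dataset flagged my-data:
QSoas> run-for-datasets process-one.cmds flagged:my-data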
noop – No op
noop (/*=)words
- (/*=)words (default option): Ignored options – values: several words, separated by spaces
Does nothing (no operation).
This command can be combined with the advanced argument substitutions described in run to conditionally execute some commands.
pause – Pause
pause (/message=)text /time=number
- (/message=)text (default option): the message to display – values: arbitrary text. If you need spaces, do not forget to quote them with ‘ or “
/time=number: time to pause for, in seconds – values: a floating-point number
This command temporarily stops the execution of a script, either displaying the given message or pausing for a certain time (if the /time= option is used).
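For instance, inside a script, one could either wait for two seconds or display a message (a sketch):
pause /time=2
pause "Have a look at the current dataset"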
Non-interactive commands
In addition to purely scripting commands, many commands do not require user interaction, provided all their arguments are given. They are listed here:
add
add-noise
apply-formula
auto-correlation
auto-filter-bs
auto-filter-fft
auto-flag
auto-reglin
average
bin
break
browse
cat
cd
chop
clear
clear-segments
clear-stack
combine-fits
commands
comment
contour
contract
credits
custom-fit
dataset-options
debug
define-alias
define-derived-fit
define-distribution
define-distribution-fit
define-implicit-fit
define-kinetic-system-fit
diff
diff2
display-aliases
div
downsample
drop
dx
dy
echem-peaks
edit
eval
expand
fetch
find-peaks
find-root
find-steps
flag
generate-dataset
graphics-settings
head
help
hide-buffer
integrate
integrate-formula
interpolate
kernel-filter
kinetic-system
let
limits
linear-least-squares
load
load-as-chi-txt
load-as-csv
load-as-eclab-ascii
load-as-parameters
load-as-text
load-fits
load-stack
ls
mem
merge
mintegrate-formula
multiply
noop
norm
ode
output
overlay
overlay-buffer
pause
points
pop
print
pwd
quit
record-meta
redo
remove-spikes
rename
reparametrize-fit
reverse
rotate
ruby-run
run
run-for-datasets
run-for-each
save
save-datasets
save-history
save-meta
save-output
save-stack
segments-chop
set-column-names
set-meta
set-perp
set-row-names
shiftx
show
show-stack
sim-adsorbed
sim-arb
sim-eci-wave
sim-ecr-wave
sim-ecro-wave
sim-eeci-wave
sim-eecr-relay-wave
sim-eecr-wave
sim-eecro-wave
sim-exponential-decay
sim-gaussian
sim-implicit
sim-kinetic-system
sim-linear-kinetic-system
sim-lorentzian
sim-multiexp-multistep
sim-nernst
sim-ode
sim-polynomial
sim-pseudo-voigt
sim-slow-scan-hp
sim-slow-scan-lp
solve
sort
sort-datasets
split-monotonic
split-on-values
splita
splitb
startup-files
stats
strip-if
subtract
system
temperature
timer
tips
transpose
tweak-columns
undo
unflag
unwrap
verify
version
zero
Mathematical formulas using Ruby
QSoas internally uses Ruby (or, more precisely, its embedded version, mruby) for the interpretation of all formulas. This means in particular that all formulas must be valid Ruby code.
Basically, the Ruby syntax resembles that of other symbolic evaluation programs (it is quite close to that of gnuplot), with the following restrictions:
- Parameter names cannot start with an uppercase letter, as those have a special meaning to the Ruby interpreter: anything that starts with an uppercase letter is assumed to be a constant.
- Don’t abbreviate floating-point numbers: 2. and .4 are invalid, use 2.0 and 0.4 instead.
- Case matters: Pi is the constant π, while pi is not defined.
- Exponentiation is done with the ** operator. The ^ operator is used for binary XOR.
- Logical OR is done with the || operator and logical AND with the && operator. The single-character versions, | and &, are binary (bitwise) operators and will not work as you intend.
For instance:
QSoas> eval 2+2
=> 4
QSoas> eval 2**8
=> 256
QSoas> eval sin(0.5*PI)
=> 1
QSoas> eval sin(0.25*PI)
=> 0.70710678118655
The last examples take advantage of the definition of the constant PI.
Define global variables
Using Ruby, it is possible to define local and global variables. Local variables have to start with a lowercase letter and can be defined simply by using an = sign. For instance:
QSoas> eval x=2;x**8
=> 256
In this example, a variable called x is defined to be equal to 2; its 8th power is computed afterwards. The ; separates two instructions.
The value of x is lost as soon as the command is finished:
QSoas> eval x
Error: A ruby exception occurred: (eval):1: undefined method 'x'
(NoMethodError)
To create persistent storage, you can use a global variable, which looks like a local one except that its name must be preceded by a $ sign:
QSoas> eval $x=2
=> 2
QSoas> eval $x**8
=> 256
In this case, the value of $x is not lost between the various calls to eval. It is in fact persistent over the whole duration of the QSoas session (but is forgotten once you close the program), and it can be used in other expressions as well (in the other commands using Ruby code, see below).
Some commands also produce global variables in Ruby that you can use later, using eval for instance.
Boolean expressions
In Ruby, it is also possible to perform the usual tests:
QSoas> eval $x=2
=> 2
QSoas> eval $x>3
=> false
The normal comparisons are available: <, >, <=, >=. To test for equality, use ==. If you need to chain several tests, use the following operators:
- logical or: ||, which will be true if either condition is true;
- logical and: &&, which will be true only if both conditions are true.
For instance:
QSoas> eval ($x>5)||($x<3)
=> true
QSoas> eval ($x>5)&&($x<3)
=> false
Using Ruby code
Ruby code can be used in several contexts:
- in eval, one can make “general computations”, which can refer to “global properties” of the current dataset, like its meta-data or statistics (or of other datasets too);
- in the /for-which options, which take a boolean expression to select datasets from a list, using their “global properties”;
- in apply-formula, the formula is applied to each row of a dataset, possibly modifying the values;
- in strip-if, the formula is also applied to each row of a dataset, but this time to evaluate whether a row is kept (false) or removed (true) (see the short example after this list);
- in the arb fits, to specify the function (of the variable x) to fit;
- in many other places too.
Special variables
Most ruby expressions can make use of dataset information, such as meta-data or statistics (see the documentation of the specific command for more information about how to make this available):
- the special variable $stats allows access to the statistics, as given by stats.
- the special variable $meta gives access to the meta-data.
For instance, to subtract the average from the y column:
QSoas> apply-formula y-=$stats.y_average
To show the name of the original file of the current dataset:
QSoas> eval $meta.original_file
Auto-completion is able to complete the fields of $stats and $meta.
Complex numbers
QSoas now includes limited support for handling complex numbers. While the contents of the datasets can only be series of real numbers, all the Ruby formulas can define and use complex numbers. You can create complex numbers using the Cplx(real, imag) function, or just using I:
QSoas> eval Cplx(1,2)
=> (1+2*I)
QSoas> eval (I+2)**2
=> (3+4*I)
The exponential and trigonometric functions accept complex numbers as arguments:
QSoas> eval exp(2+PI*I/4)
=> (5.22485+5.22485*I)
To convert back to real values, you can use the .real or .imag methods to get the real or imaginary parts, the abs function, which returns the modulus of the complex number, or the arg function, which returns the argument (in radians).
For instance, you can generate a spiral using:
QSoas> generate-dataset -20 20
QSoas> apply-formula z=exp((0.1+PI*I)*x);x=z.real;y=z.imag
Applying formula 'z=exp((0.1+PI*I)*x);x=z.real;y=z.imag' to buffer generated.dat
Special functions
In addition to the standard mathematical functions from the Math module (which contains, among others, the error function erf), the following special functions are available:
- abs(x): |x|, works on complex numbers too
- airy_ai(x): Airy Ai function. Precision to about . Other variants available: airy_ai_fast is faster (precision ) and airy_ai_double slower (precision ). (more information there)
- airy_ai_deriv(x): First derivative of the Airy Ai function. Precision to about . Other variants available: airy_ai_deriv_fast is faster (precision ) and airy_ai_deriv_double slower (precision ). (more information there)
- airy_bi(x): Airy Bi function. Precision to about . Other variants available: airy_bi_fast is faster (precision ) and airy_bi_double slower (precision ). (more information there)
- airy_bi_deriv(x): First derivative of the Airy Bi function. Precision to about . Other variants available: airy_bi_deriv_fast is faster (precision ) and airy_bi_deriv_double slower (precision ). (more information there)
- arg(x): the argument of the complex number
- atanc(x):
- atanhc(x):
- bessel_j0(x): Regular cylindrical Bessel function of 0th order (more information there)
- bessel_j1(x): Regular cylindrical Bessel function of first order (more information there)
- bessel_jn(x,n): Regular cylindrical Bessel function of n-th order (more information there)
- bessel_y0(x): Irregular cylindrical Bessel function of 0th order (more information there)
- bessel_y1(x): Irregular cylindrical Bessel function of first order (more information there)
- bessel_yn(x,n): Irregular cylindrical Bessel function of n-th order (more information there)
- clausen(x): Clausen integral (more information there)
- dawson(x): Dawson integral
- debye_1(x): Debye function of order 1 (more information there)
- debye_2(x): Debye function of order 2 (more information there)
- debye_3(x): Debye function of order 3 (more information there)
- debye_4(x): Debye function of order 4 (more information there)
- debye_5(x): Debye function of order 5 (more information there)
- debye_6(x): Debye function of order 6 (more information there)
- dilog(x): The dilogarithm (more information there)
- exp(x): the exponential, works on complex numbers too
- expint_e1(x): Exponential integral
- expint_e2(x): Exponential integral
- expint_en(x,n): Exponential integral
- fermi_dirac_0(x): Complete Fermi-Dirac integral (index 0) (more information there)
- fermi_dirac_1(x): Complete Fermi-Dirac integral (index 1) (more information there)
- fermi_dirac_2(x): Complete Fermi-Dirac integral (index 2) (more information there)
- fermi_dirac_3half(x): Complete Fermi-Dirac integral (index 3/2) (more information there)
- fermi_dirac_half(x): Complete Fermi-Dirac integral (index 1/2) (more information there)
- fermi_dirac_m1(x): Complete Fermi-Dirac integral (index -1) (more information there)
- fermi_dirac_mhalf(x): Complete Fermi-Dirac integral (index -1/2) (more information there)
- fermi_dirac_n(x,n): Complete Fermi-Dirac integral of index n (more information there)
- gamma(x): The Gauss gamma function (more information there)
- gamma_inc(a,x): Incomplete gamma function (more information there)
- gamma_inc_p(a,x): Complementary normalized incomplete gamma function (more information there)
- gamma_inc_q(a,x): Normalized incomplete gamma function (more information there)
- gaussian(x,sigma): Normalized gaussian
- gsl_erf(x): Error function – GSL version (more information there)
- gsl_erfc(x): Complementary error function (more information there)
- hyperg_0F1(c,x): Hypergeometric function (more information there)
- hyperg_1F1(a,b,x): Hypergeometric function (more information there)
- hyperg_U(a,b,x): Hypergeometric function (more information there)
- k_mhc(lambda, eta): Marcus-Hush-Chidsey integral. Single precision, computed using the fast trapezoid method. (more information there)
- k_mhc_double(lambda, eta): Marcus-Hush-Chidsey integral. Double precision, computed using the series by Bieniasz, JEAC 2012. (more information there)
- k_mhc_n(lambda, eta): Approximation to the Marcus-Hush-Chidsey integral described in Nahir, JEAC 2002 (more information there)
- k_mhc_z(lambda, eta): Approximation to the Marcus-Hush-Chidsey integral described in Zeng et al, JEAC 2014 (more information there)
- lambert_W(x): Principal branch of the Lambert function (more information there)
- lambert_Wm1(x): Secondary branch of the Lambert function (more information there)
- landau(x): Probability density of the Landau distribution (more information there)
- ln_erfc(x): Logarithm of the complementary error function (more information there)
- ln_gamma(x): The logarithm of the gamma function (more information there)
- log(x): the natural logarithm, works on complex numbers too
- log1p(x): log(1+x), but accurate for x close to 0
- lorentzian(x,gamma): Normalized lorentzian
- pseudo_voigt(x, w, mu): Pseudo-Voigt function
- psi(x): Digamma function (more information there)
- psi_1(x): Trigamma function (more information there)
- psi_n(x, n): Polygamma function (more information there)
- trumpet_bv(m, alpha, prec): Position of an oxidative adsorbed 1-electron peak. m is the coefficient defined by Laviron; the value is returned in units of
- weibull(x,a,b): Probability of the Weibull distribution (more information there)
Physical constants
Some physical/mathematical constants are available; their name starts with an uppercase letter.
- Alpha: The fine structure constant – 0.00729735
- C: The speed of light in vacuum – 2.99792e+08
- Eps_0: The permittivity of vacuum – 8.85419e-12
- F: Faraday’s constant – 96485.3
- H: The Planck constant – 6.62607e-34
- Hbar: The reduced Planck constant – 1.05457e-34
- Kb: Boltzmann’s constant – 1.38065e-23
- M_e: The mass of the electron – 9.10938e-31
- M_mu: The mass of the muon – 1.88353e-28
- M_n: The mass of the neutron – 1.67493e-27
- M_p: The mass of the proton – 1.67262e-27
- Mu_0: The permeability of vacuum – 1.25664e-06
- Mu_B: The Bohr magneton – 9.27401e-24
- Na: The Avogadro number – 6.02214e+23
- Pi, PI: the number π – 3.14159
- Q_e: The absolute value of the charge of the electron – 1.60218e-19
- R: Molar gas constant – 8.31447
- Ry: The Rydberg constant – 2.17987e-18
- Sigma: The Stefan-Boltzmann radiation constant – 5.6704e-08
Other additions to Ruby
The embedded version of Ruby, mruby, does not have a regular expression engine. We have added one, but it is not based on standard Ruby regular expressions; it uses the regular expressions from Qt. For most regular expressions, this should not matter, however.
Running QSoas
QSoas can also be useful when run from the command-line.
Command-line options
When starting QSoas from a terminal, you can use a number of command-line options to change its behaviour. Here are the most useful:
- --run command runs the given command after QSoas starts up.
- --exit-after-running runs the commands specified by --run, and then exits the program. This can be used to run scripts to automatically process data without user interaction.
- --no-startup-files disables the loading of startup scripts.
- --stdout makes the text written to the QSoas terminal also appear in the standard output (i.e. the terminal from which you started QSoas).
- --load-stack file loads the given file as a stack file just after QSoas starts up.
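For instance, assuming the executable is called QSoas and is in your PATH, and that process.cmds is a command file of your own, a fully unattended run could look like this (how the quoted command is passed may depend on your shell):
QSoas --no-startup-files --stdout --run '@ process.cmds data_file.dat' --exit-after-running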
Non-interactive running of QSoas
It is possible to run QSoas completely non-interactively. This can be useful for regenerating the results of fits, or massively subtracting baselines…
The simplest way to do so is to use the scripts/qs-run script included in the source code archive. Copy that script where you have the QSoas command file you want to run, open an operating system command-line terminal and run:
# ./qs-run my-command-script.txt
This file was written by Vincent Fourmond, and is copyright (c) 2012-2020 by CNRS/AMU.