|Scilab Bag Of Tricks: The Scilab-2.5 IAQ (Infrequently Asked Questions)|
Chapter 4. Unknown Spots
Functions are Scilab's main abstraction feature, so they deserve a closer look.
The "Introduction to Scilab", SCI/doc/Intro.ps, only covers functions that take one or more arguments and return one or more values. If only one value is returned, the square brackets in the function definition are optional. Therefore the function head
function [y] = foo(x)
can be abbreviated to
function y = foo(x)
However, this is 100% pure syntactic sugar. What is much more important – and a valuable feature – is the possibility of defining a function that returns nothing as
function ext_print(x)
    printf("%f, %g", x, x)
does. In Fortran parlance ext_print would be called a SUBROUTINE, whereas Ada programmers would term it a PROCEDURE.
Of similar importance is the definition of parameterless functions.
function t = hires_timer()
    cps = 166e6
    t = rdtsc() / cps
The parentheses after the function name are optional when defining the function, but not when calling it. For further information about the omission of parentheses when calling a function, see the section called Omitting Parentheses on Function Call.
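To make the asymmetry concrete, here is a small sketch; tick and tock are made-up names, and timer is Scilab's builtin CPU-time stopwatch:

```scilab
function t = tick()          // parentheses in the definition ...
    t = timer()

function t = tock            // ... may equally well be omitted
    t = timer()

t1 = tick()                  // a call, however, needs the parentheses
t2 = tock()
```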
If we want to write bulletproof Scilab functions, we have to ensure that our functions receive the right number of arguments, and that these arguments have the correct type and dimension. This care is necessary because Scilab's dynamic nature allows us to pass arguments of different types, dimensions, etc. to a function.
Example 4-1. Function cat
function [res] = cat(macname)
    // Print definition of function 'macname'
    // if it has been loaded via a library.
    [nl, nr] = argn(0);
    if nr ~= 1 then
        error("Call with: cat(macro_name)");
    end
    if type(macname) ~= 10 then
        error("Expecting a string, got a " ..
              + typeof(macname));
    end
    if size(macname, "*") ~= 1 then
        sz = size(macname);
        error("Expecting a scalar, got a " ..
              + sz(1) + "x" + sz(2) + " matrix")
    end
    [res, err] = evstr(macname);
    if err ~= 0 then
        select err
        case 4 then
            disp(macname + " is undefined.");
            return;
        case 25 then
            disp(macname + " is a builtin function");
            return;
        else
            error("unexpected error", err);
        end // select err
    end // err ~= 0
    ...
Ensuring the correct number of input arguments is always the first step; otherwise we cannot even be sure that accessing a parameter is valid. The number of output values is less critical, for calling a function with fewer output variables than specified in the function's signature simply causes the extra output values to be silently discarded.
After learning the number of actual rhs-parameters, we immediately check whether it lies in the right range. Our example simply terminates with an error if the number of arguments is incorrect.
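Stripped of cat's specifics, the argument-counting idiom looks like this (safe_sqrt is a made-up name):

```scilab
function y = safe_sqrt(x)
    [nl, nr] = argn(0)                  // nl: lhs count, nr: rhs count
    if nr ~= 1 then
        error("Call with: safe_sqrt(x)")
    end
    y = sqrt(x)
```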
It is conceivable that we convert from one type to another, say from numeric to string. Furthermore, it is possible that the type of the arguments determines the algorithm chosen, a feature normally advertised under the name "function overloading".
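Such type-driven dispatch could be sketched like this (describe is a made-up name; the type codes are Scilab's: 1 for numeric matrices, 10 for string matrices):

```scilab
function describe(x)
    select type(x)
    case 1 then                  // constant (numeric) matrix
        printf("numeric, %d element(s)\n", size(x, '*'))
    case 10 then                 // matrix of character strings
        printf("string: %s\n", x(1))
    else                         // fall back with a diagnostic
        error("describe: cannot handle a " + typeof(x))
    end
```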
At first glance all this checking machinery might seem exaggerated. To do it justice we should keep in mind that it is only necessary if a function must work reliably in different environments. All functions that a library exports belong to this class, because the library writer does not know how the functions will be used. Quick-and-dirty functions are a different matter, and so are functions that are never called interactively.
Functions are a data type in their own right; therefore they can themselves be arguments to other functions, and can be elements of lists.
-->deff('y = fun(x)', 'if x > 0, y = sin(x); else, y = -1; end')

-->fun(%pi/2)
 ans  =
    1.

-->fun(-3)
 ans  =
  - 1.

-->bar = fun
 bar  =
[y]=bar(x)

-->typeof(bar)
 ans  =
 function

-->deff('a = fun(u, v, w)', 'a = u^2 + v^2 + 2*u*v - w^2')
Warning :redefining function: fun

-->bar(%pi/4)^2
 ans  =
    0.5

-->fun(2, 3, 4)
 ans  =
    9.
As the example shows, Scilab employs its usual copy-by-value semantics when assigning function variables, consistent with the assignment of variables of any other data type.
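Functions stored in a list can be fetched back into a variable and applied; a minimal sketch with made-up names:

```scilab
deff('y = sqr(x)', 'y = x^2')
deff('y = dbl(x)', 'y = 2*x')
funs = list(sqr, dbl)        // two functions as list elements

f = funs(1)                  // extract the first function ...
y = f(3)                     // ... and call it; y is 9
```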
Function definitions can be nested; the usual scoping rules apply. Inline nested function definitions are somewhat awkward because of the sheer number of quotes, but deffs in functions are easy on the eye.
Example 4-2. Function tauc
function [t, rmin, r0] = tauc(E0, M, s, D)
    deff('U = Umorse(r, steepness, depth)', ..
         'e = exp(-r * steepness); ..
          U = depth*(e^2 - 2*e)');
    // point of vanishing potential
    deff('y = equ0(x)', 'y = Umorse(x, s, D)');
    // reflection point
    deff('y = equ1(x)', 'y = Umorse(x, s, D) - E0');
    deff('tau = integrand(x)', ..
         'tau = sqrt( M / (2*(E0 - Umorse(x, s, D))) )');
    // rationalized units...
    units = 10.0e-10 / sqrt(1.380662e-23 / 1.6605655e-27);
    // calculate endpoints of definite integral
    r0 = fsolve(-10.0, equ0);
    rmin = fsolve(-10.0, equ1);
    // evaluate definite integral
    [t_unscaled, err] = intg(rmin, r0, integrand);
    t = 2 * units * t_unscaled;
As mentioned above, user-defined functions can be passed as parameters to other functions. Builtin functions have to be "wrapped" in user-defined functions before they can be used as parameters.
The following example defines a functional that implements a property of Dirac's delta distribution.
-->deff('y = delta(a, foo)', 'y = foo(a)')

-->delta(0, cos)
       !--error    25
bad call to primitive :cos

-->deff('y = mycos(x)', 'y = cos(x)')

-->delta(0, mycos)
 ans  =
    1.
The next example is a bit more convoluted, but also closer to the real world. We define a new optimizer function, called minimize, which is based on Scilab's optim function. minimize expects two vectors of data points xdata and ydata, a vector of initial parameters p_ini, the function to be minimized func, and an objective functional obj.
The advantage of defining separate model and objective functions is increased flexibility: both can be replaced at will without changing the core minimization function minimize.
Example 4-3. Function minimize
function [f, p_opt, g_opt] = minimize(xdata, ydata, ..
                                      p_ini, func, obj)
    // on-the-fly definition of the objective function
    deff('[f, g, ind] = _cost(p_vec, ind)', ..
         '[f_val, f_grad] = func(xdata, p_vec); ..
          [f, g] = obj(f_val - ydata, f_grad)');
    [f, p_opt, g_opt] = optim(_cost, p_ini);
The function minimize needs a model function func that returns the value and the gradient at all points x for a given vector of parameters p_vec. Moreover we need the objective functional obj that gives the "cost" and the direction of steepest descent in parameter space.
In this example we choose a quadratic polynomial, my_model, as the model, and least squares, lsq, as the objective.
function [f, g] = my_model(x, p)
    g = [ones(x), x, x.*x];
    f = p(1) + x.*(p(2) + x*p(3));

function [f, g] = lsq(diff, grad)
    f = 0.5 * norm(diff)^2;
    g = grad' * diff;
Given these definitions, we can call minimize:
dx = [0.0 1.0 2.0 2.5 3.0]';
dy = [0.0 0.9 4.1 6.1 9.5]';
p_ini = [0.1 -0.2 0.9]';
[f_fin, p_fin, p_fingrad] = ..
    minimize(dx, dy, p_ini, my_model, lsq)

xbasc();
plot2d(dx, dy, -1);        // plot data points ...
xv = linspace(dx(1), dx($), 50)';
yv = my_model(xv, p_fin);
plot2d(xv, yv, 1, "000");  // ... and optimized model function
Currently the only complex data structure that allows for storage of functions is the typed list tlist.
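A minimal sketch of such a tlist (the type name 'model' and its field names are made up):

```scilab
deff('y = parabola(x)', 'y = x.*x')
m = tlist(['model', 'name', 'fn'], 'parabola', parabola)

g = m('fn')                  // retrieve the stored function ...
y = g(4)                     // ... and evaluate it; y is 16
```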
FIXME: write it