"In principle, the Taylor series of a function of n variables involves an n-vector, an n × n matrix, an n × n × n tensor, and so on. Actual use of orders higher than two, however, is so rare that the manipulation of matrices is a hundred times better supported in our brains and in our software tools than that of tensors."
(N. Trefethen, Maxims about numerical mathematics, science, computers, and life on earth)
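For a smooth f: R^n → R, the expansion Trefethen alludes to can be written out explicitly; the quadratic term uses the Hessian matrix, while the cubic term already requires an order-3 tensor of third partial derivatives:

```latex
f(x+h) = f(x) + \nabla f(x)^{\mathsf{T}} h
       + \tfrac{1}{2}\, h^{\mathsf{T}} \nabla^2 f(x)\, h
       + \tfrac{1}{6} \sum_{i,j,k=1}^{n}
         \frac{\partial^3 f}{\partial x_i \,\partial x_j \,\partial x_k}(x)\,
         h_i h_j h_k + \cdots
```

Here \(\nabla f(x)\) is the n-vector, \(\nabla^2 f(x)\) the n × n matrix, and the array of third partials the n × n × n tensor of the quote.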
This is no longer the state of affairs; see Future Directions in Tensor-Based Computation and Modeling, the report from an NSF workshop held in February 2009.
Linear algebra versus matrix computations
"Gene Golub often lamented that linear algebra, as taught in math departments, and CS237A, his famous course on numerical linear algebra, bore almost no relation to each other. One reason is that in math, linear algebra is regarded as a topic in algebra and is mostly about what could be deduced from the axiomatic definitions of fields and vector spaces. Notions like conditioning, least squares, norms, orthogonality, SVD, though central to numerical linear algebra, do not extend to arbitrary fields and are relegated to a secondary status.
Another difference, as Gene also liked to emphasize, is the pivotal role played by matrices. Many mathematicians prefer coordinate-free objects and regard matrices with disdain. But while matrices can represent linear operators with respect to chosen bases, they can also represent bilinear forms, order-2 tensors, graphs, metrics, correlations, hyperlink structures, DNA microarray measurements, movie ratings by viewers - and many of these make little sense when viewed as operators.
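The distinction shows up concretely in how the same array of numbers transforms. Under a change of basis x = Py, with P invertible, a matrix A representing a linear operator transforms by conjugation, while one representing a bilinear form transforms by congruence:

```latex
\text{operator:}\quad A \mapsto P^{-1} A P,
\qquad
\text{bilinear form:}\quad A \mapsto P^{\mathsf{T}} A P.
```

For objects like adjacency matrices or tables of ratings, neither rule is natural; the relevant symmetries are different again (for instance, independent relabelings of rows and columns), which is one reason insisting on the operator viewpoint is misleading.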
When one realizes that a matrix is not necessarily the coordinate representation of a linear operator, and is content with results valid only over the real and complex fields, linear algebra becomes enormously more interesting. In a similar spirit, we will examine the prospects of a subject we call "numerical multilinear algebra", which is to multilinear algebra what numerical linear algebra is to linear algebra."