This paper concerns methods for computing the best low multilinear
rank approximation of large and sparse tensors. Krylov-type methods have
been used for this problem; here, block versions are introduced. For the
computation of partial eigenvalue and singular value decompositions of
matrices, the Krylov-Schur (restarted Arnoldi) method is used. We describe
a generalization of this method to tensors for computing the best low
multilinear rank approximation. In analogy to
the matrix case, the large tensor is only accessed in multiplications
between the tensor and blocks of vectors, thus avoiding excessive memory
usage. It is proved that, if the starting approximation is good enough,
then the tensor Krylov-Schur method is convergent. Numerical examples
are given for synthetic tensors and sparse tensors from applications,
which demonstrate that for most large problems the Krylov-Schur method
converges faster and more robustly than higher order orthogonal iteration (HOOI).
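To illustrate the kind of tensor access the abstract refers to, the sketch below shows a mode-0 product between a sparse third-order tensor (stored in COO form) and a block of vectors, so that only the nonzero entries are ever touched. This is a minimal illustration of the general idea, not the paper's implementation; the function name `ttm_mode0` and the COO layout (`idx`, `vals`) are assumptions made for the example.

```python
import numpy as np

def ttm_mode0(idx, vals, shape, U):
    """Mode-0 product Y = X x_0 U^T of a sparse COO tensor X with a
    block of vectors U (illustrative sketch, not the paper's code).

    idx   : (nnz, 3) integer array of indices of the nonzeros of X
    vals  : (nnz,) array of the nonzero values
    shape : (I0, I1, I2) dimensions of X
    U     : (I0, r) block of vectors (e.g., an orthonormal basis block)

    Returns Y with Y[a, j, k] = sum_i U[i, a] * X[i, j, k].
    Only the nnz stored entries are accessed, so memory use scales
    with the number of nonzeros rather than the full tensor size.
    """
    r = U.shape[1]
    Y = np.zeros((r, shape[1], shape[2]))
    for (i, j, k), v in zip(idx, vals):
        Y[:, j, k] += v * U[i, :]  # scatter each nonzero into the result
    return Y
```

Analogous routines for modes 1 and 2 follow by permuting the roles of the three index columns; in a block Krylov iteration such products are the only operations that involve the large tensor itself.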