The vector space model is an algebraic model for representing text documents, and objects more generally, as vectors of identifiers. It is used in information filtering, information retrieval, indexing, and relevancy ranking, and was first used in the SMART Information Retrieval System.

When a document is represented as a vector, each dimension corresponds to a separate term; if a term occurs in the document, its value in the vector is non-zero. Several ways of computing these values, also known as term weights, have been developed, the best known being tf-idf weighting. The definition of a term depends on the application: terms are typically single words or keywords, and occasionally longer phrases. If words are chosen as terms, the dimensionality of the vector is the number of distinct words in the vocabulary.

Using the assumptions of document similarity theory, the relevance ranking of documents in a keyword search can be calculated by comparing the deviation of angles between each document vector and the original query vector, where the query is represented as a vector of the same kind as the document vectors. In practice this angular deviation is usually measured by the cosine of the angle between the two vectors.

The vector space model has several limitations. Long documents are poorly represented because they yield poor similarity values. Search keywords must match document terms precisely, so partial or substring matches may produce false positives. Documents that share a similar context but use a different vocabulary of terms will not be matched, producing false negatives. Finally, the order in which terms appear in the document is lost in the vector space representation.
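The following is a minimal sketch of these ideas, assuming single words as terms, tf-idf weighting, and cosine similarity as the measure of angular deviation between query and document vectors. The corpus, query, and function names are illustrative, not part of any particular system.

```python
import math
from collections import Counter

# Toy corpus and query (hypothetical example data).
documents = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "dogs and cats make good pets",
]
query = "cat on a mat"

def tokenize(text):
    """Split text into lowercase single-word terms."""
    return text.lower().split()

# One vector dimension per distinct term in the corpus vocabulary.
vocabulary = sorted({term for doc in documents for term in tokenize(doc)})
corpus_tokens = [set(tokenize(doc)) for doc in documents]

def tf_idf_vector(text, corpus_tokens, vocabulary):
    """Represent `text` as a tf-idf weighted vector over `vocabulary`."""
    counts = Counter(tokenize(text))
    n_docs = len(corpus_tokens)
    vector = []
    for term in vocabulary:
        tf = counts[term]
        # Document frequency: number of documents containing the term.
        df = sum(1 for tokens in corpus_tokens if term in tokens)
        # Smoothed idf to avoid division by zero for unseen terms.
        idf = math.log((1 + n_docs) / (1 + df)) + 1
        vector.append(tf * idf)
    return vector

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 0 if either is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

doc_vectors = [tf_idf_vector(doc, corpus_tokens, vocabulary) for doc in documents]
query_vector = tf_idf_vector(query, corpus_tokens, vocabulary)

# Rank documents by the cosine of the angle between each document vector
# and the query vector (higher cosine = smaller angular deviation).
ranking = sorted(
    enumerate(cosine_similarity(query_vector, v) for v in doc_vectors),
    key=lambda pair: pair[1],
    reverse=True,
)
for doc_index, score in ranking:
    print(f"doc {doc_index}: {score:.3f}")
```

Note that the query term "a" does not appear in the corpus vocabulary and therefore contributes nothing to the query vector, which illustrates the exact-match limitation described above.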