An index handle uniquely identifies an index in the database. It is a string and
consists of the collection name and an index identifier separated by a /. The
index identifier part is a numeric value that is auto-generated by ArangoDB.
A specific index of a collection can be accessed using its index handle or
index identifier as follows:
For example: Assume that the index handle, which is stored in the _id
attribute of the index, is demo/362549736 and the index was created in a collection
named demo. Then this index can be accessed as:
db.demo.index("demo/362549736");
Because the index handle is unique within the database, you can leave out the
collection and use the shortcut:
db._index("demo/362549736");
An index may also be looked up by its name. Since names are only unique within
a collection, rather than within the database, the lookup must also include the
collection name.
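For example, assuming an index named idx_age exists in the demo collection (both names are just placeholders for this illustration), it can be looked up as:
db._index("demo/idx_age");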
ensures that an index exists
collection.ensureIndex(index-description)
Ensures that an index according to the index-description exists. A
new index will be created if none exists with the given description.
The index-description must contain at least a type attribute.
Other attributes may be necessary, depending on the index type.
type can be one of the following values:
persistent: persistent index
fulltext: fulltext index
geo: geo index, with one or two attributes
name can be a string. Index names are subject to the same character
restrictions as collection names. If omitted, a name will be auto-generated so
that it is unique with respect to the collection, e.g. idx_832910498.
sparse can be true or false.
For persistent indexes the sparsity can be controlled; fulltext and geo
indexes are sparse by definition.
unique can be true or false and is supported by indexes of type persistent.
deduplicate can be true or false and is supported by array indexes of
type persistent. It controls whether inserting duplicate index values
from the same document into a unique array index will lead to a unique constraint
error or not. The default value is true, so only a single instance of each
non-unique index value will be inserted into the index per document. Trying to
insert a value that already exists in the index will always fail,
regardless of the value of this attribute.
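As a sketch of how deduplicate interacts with a unique array index, assume a demo collection whose documents have a tags array attribute (collection and attribute names are hypothetical):
db.demo.ensureIndex({ type: "persistent", fields: [ "tags[*]" ], unique: true, deduplicate: true });
With deduplicate set to true, saving a document such as { tags: [ "a", "a", "b" ] } inserts only one index entry for "a" for that document instead of raising a unique constraint error.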
estimates can be true or false and is supported by indexes of type
persistent. This attribute controls whether index selectivity estimates are
maintained for the index. Not maintaining index selectivity estimates can have
a slightly positive impact on write performance.
The downside of turning off index selectivity estimates is that
the query optimizer will not be able to determine the usefulness of different
competing indexes in AQL queries when there are multiple candidate indexes to
choose from.
The estimates attribute is optional and defaults to true if not set. It has
no effect on indexes other than persistent (with hash and skiplist
being mere aliases for persistent nowadays).
Calling this method returns an index object. Whether or not the index
object existed before the call is indicated in the return attribute
isNewlyCreated.
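For illustration, the following creates a sparse persistent index without selectivity estimates on a hypothetical age attribute of the demo collection and gives it an explicit name (attribute and index names are assumptions for this example):
db.demo.ensureIndex({ type: "persistent", fields: [ "age" ], sparse: true, estimates: false, name: "idx_age" });
The first call returns the index object with isNewlyCreated set to true; repeating the same call returns the existing index with isNewlyCreated set to false.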
Dropping an index via a collection handle
drops an index
collection.dropIndex(index)
Drops the index. If the index does not exist, then false is
returned. If the index existed and was dropped, then true is
returned. Note that you cannot drop some special indexes (e.g. the primary
index of a collection or the edge index of an edge collection).
collection.dropIndex(index-handle)
Same as above. Instead of an index object, an index handle can be given.
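For example, the index with the handle assumed earlier can be dropped like this:
db.demo.dropIndex("demo/362549736");
The call returns true if that index existed and was dropped, and false otherwise.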
Loads all indexes of this collection into memory.
collection.loadIndexesIntoMemory()
This function tries to cache all index entries
of this collection in main memory.
To do so, it iterates over all indexes of the collection
and stores the indexed values, not the entire document data,
in memory.
Lookups that can be served from the cache are much faster
than lookups that cannot, so you get a nice performance boost.
It is also guaranteed that the cache is consistent with the stored data.
This function honors memory limits. If the indexes you want to load are smaller
than your memory limit, this function guarantees that most index values are
cached. If an index is larger than your memory limit, this function fills the
cache with values up to this limit; for the time being, there is no way to
control which indexes of the collection should have priority over others.
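A minimal usage sketch, again using the demo collection from the examples above:
db.demo.loadIndexesIntoMemory();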