Redis Modules Commands#

Accessing Redis module commands requires the installation of the corresponding Redis module. For a quick start with Redis modules, try the Redismod docker.

RedisBloom Commands#

These are the commands for interacting with the RedisBloom module. Below is a brief example, as well as documentation on the commands themselves.

Create and add to a bloom filter

import redis
r = redis.Redis()
r.bf().create("bloom", 0.01, 1000)
r.bf().add("bloom", "foo")

Create and add to a cuckoo filter

import redis
r = redis.Redis()
r.cf().create("cuckoo", 1000)
r.cf().add("cuckoo", "filter")

Create Count-Min Sketch and get information

import redis
r = redis.Redis()
r.cms().initbydim("dim", 1000, 5)
r.cms().incrby("dim", ["foo"], [5])
r.cms().info("dim")

Create a Top-K list, and access its information

import redis
r = redis.Redis()
r.topk().reserve("mytopk", 3, 50, 4, 0.9)
r.topk().info("mytopk")
class redis.commands.bf.commands.BFCommands[source]#

Bloom Filter commands.

add(key, item)[source]#

Add an item to a Bloom Filter key. For more information see BF.ADD.

card(key)[source]#

Returns the cardinality of a Bloom filter - number of items that were added to a Bloom filter and detected as unique (items that caused at least one bit to be set in at least one sub-filter). For more information see BF.CARD.

create(key, errorRate, capacity, expansion=None, noScale=None)[source]#

Create a new Bloom Filter key with the desired false-positive probability errorRate and the expected number of entries to be inserted as capacity. The default expansion value is 2. By default, the filter is auto-scaling. For more information see BF.RESERVE.

exists(key, item)[source]#

Check whether an item exists in Bloom Filter key. For more information see BF.EXISTS.

info(key)[source]#

Return capacity, size, number of filters, number of items inserted, and expansion rate. For more information see BF.INFO.

insert(key, items, capacity=None, error=None, noCreate=None, expansion=None, noScale=None)[source]#

Add multiple items to a Bloom Filter key.

If noCreate remains None and key does not exist, a new Bloom Filter key will be created with the desired false-positive probability error and the expected number of entries to be inserted as capacity. For more information see BF.INSERT.

loadchunk(key, iter, data)[source]#

Restore a filter previously saved using SCANDUMP.

See the SCANDUMP command for example usage. This command will overwrite any bloom filter stored under key. Ensure that the bloom filter will not be modified between invocations. For more information see BF.LOADCHUNK.

madd(key, *items)[source]#

Add multiple items to a Bloom Filter key. For more information see BF.MADD.

mexists(key, *items)[source]#

Check whether items exist in Bloom Filter key. For more information see BF.MEXISTS.

reserve(key, errorRate, capacity, expansion=None, noScale=None)#

Create a new Bloom Filter key with the desired false-positive probability errorRate and the expected number of entries to be inserted as capacity. The default expansion value is 2. By default, the filter is auto-scaling. For more information see BF.RESERVE.

scandump(key, iter)[source]#

Begin an incremental save of the bloom filter key.

This is useful for large bloom filters which cannot fit into the normal SAVE and RESTORE model. The first time this command is called, the value of iter should be 0. This command will return successive (iter, data) pairs until (0, NULL) to indicate completion. For more information see BF.SCANDUMP.
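
A minimal dump-and-restore sketch using scandump together with loadchunk (an illustrative example assuming a local Redis instance with RedisBloom loaded; key names are made up):

import redis
r = redis.Redis()
r.bf().create("bloom-src", 0.01, 1000)
r.bf().add("bloom-src", "foo")

# Collect (iter, data) chunks until the module returns iter == 0
chunks = []
cursor = 0
while True:
    cursor, data = r.bf().scandump("bloom-src", cursor)
    if cursor == 0:
        break
    chunks.append((cursor, data))

# Replay the chunks, in order, into a fresh key
for cursor, data in chunks:
    r.bf().loadchunk("bloom-dst", cursor, data)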

class redis.commands.bf.commands.CFCommands[source]#

Cuckoo Filter commands.

add(key, item)[source]#

Add an item to a Cuckoo Filter key. For more information see CF.ADD.

addnx(key, item)[source]#

Add an item to a Cuckoo Filter key only if the item does not yet exist. This command might be slower than add. For more information see CF.ADDNX.

count(key, item)[source]#

Return the number of times an item may be in the key. For more information see CF.COUNT.

create(key, capacity, expansion=None, bucket_size=None, max_iterations=None)[source]#

Create a new Cuckoo Filter key with an initial capacity of capacity items. For more information see CF.RESERVE.

delete(key, item)[source]#

Delete item from key. For more information see CF.DEL.

exists(key, item)[source]#

Check whether an item exists in Cuckoo Filter key. For more information see CF.EXISTS.

info(key)[source]#

Return size, number of buckets, number of filters, number of items inserted, number of items deleted, bucket size, expansion rate, and max iterations. For more information see CF.INFO.

insert(key, items, capacity=None, nocreate=None)[source]#

Add multiple items to a Cuckoo Filter key, allowing the filter to be created with a custom capacity if it does not yet exist. items must be provided as a list. For more information see CF.INSERT.

insertnx(key, items, capacity=None, nocreate=None)[source]#

Add multiple items to a Cuckoo Filter key only if they do not exist yet, allowing the filter to be created with a custom capacity if it does not yet exist. items must be provided as a list. For more information see CF.INSERTNX.
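
A short sketch of bulk insertion with insert and insertnx (illustrative key names, assuming RedisBloom is loaded):

import redis
r = redis.Redis()
# First insert creates the filter with a custom capacity
r.cf().insert("cuckoo-bulk", ["a", "b", "c"], capacity=2000)
# insertnx only adds the items that are not already present
r.cf().insertnx("cuckoo-bulk", ["b", "d"])
r.cf().exists("cuckoo-bulk", "d")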

loadchunk(key, iter, data)[source]#

Restore a filter previously saved using SCANDUMP. See the SCANDUMP command for example usage.

This command will overwrite any Cuckoo filter stored under key. Ensure that the Cuckoo filter will not be modified between invocations. For more information see CF.LOADCHUNK.

mexists(key, *items)[source]#

Check whether items exist in Cuckoo Filter key. For more information see CF.MEXISTS.

reserve(key, capacity, expansion=None, bucket_size=None, max_iterations=None)#

Create a new Cuckoo Filter key with an initial capacity of capacity items. For more information see CF.RESERVE.

scandump(key, iter)[source]#

Begin an incremental save of the Cuckoo filter key.

This is useful for large Cuckoo filters which cannot fit into the normal SAVE and RESTORE model. The first time this command is called, the value of iter should be 0. This command will return successive (iter, data) pairs until (0, NULL) to indicate completion. For more information see CF.SCANDUMP.

class redis.commands.bf.commands.CMSCommands[source]#

Count-Min Sketch Commands

incrby(key, items, increments)[source]#

Add/increase items in a Count-Min Sketch key by increments. Both items and increments are lists. For more information see CMS.INCRBY.

Example:

>>> r.cms().incrby('A', ['foo'], [1])
info(key)[source]#

Return width, depth and total count of the sketch. For more information see CMS.INFO.

initbydim(key, width, depth)[source]#

Initialize a Count-Min Sketch key to dimensions (width, depth) specified by user. For more information see CMS.INITBYDIM.

initbyprob(key, error, probability)[source]#

Initialize a Count-Min Sketch key to characteristics (error, probability) specified by user. For more information see CMS.INITBYPROB.

merge(destKey, numKeys, srcKeys, weights=[])[source]#

Merge numKeys of sketches into destKey. Sketches specified in srcKeys. All sketches must have identical width and depth. Weights can be used to multiply certain sketches. Default weight is 1. Both srcKeys and weights are lists. For more information see CMS.MERGE.
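
A hedged sketch of a weighted merge of two sketches (key names and weights are illustrative):

import redis
r = redis.Redis()
r.cms().initbydim("cms_a", 1000, 5)
r.cms().initbydim("cms_b", 1000, 5)
r.cms().initbydim("cms_dest", 1000, 5)
r.cms().incrby("cms_a", ["foo"], [3])
r.cms().incrby("cms_b", ["foo"], [2])
# dest = 1 * cms_a + 2 * cms_b
r.cms().merge("cms_dest", 2, ["cms_a", "cms_b"], weights=[1, 2])
r.cms().query("cms_dest", "foo")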

query(key, *items)[source]#

Return count for an item from key. Multiple items can be queried with one call. For more information see CMS.QUERY.

class redis.commands.bf.commands.TOPKCommands[source]#

Top-K Filter commands.

add(key, *items)[source]#

Add one item or more to a Top-K Filter key. For more information see TOPK.ADD.

count(key, *items)[source]#

Return count for one item or more from key. For more information see TOPK.COUNT.

incrby(key, items, increments)[source]#

Add/increase items in a Top-K Sketch key by increments. Both items and increments are lists. For more information see TOPK.INCRBY.

Example:

>>> r.topk().incrby('A', ['foo'], [1])
info(key)[source]#

Return k, width, depth and decay values of key. For more information see TOPK.INFO.

list(key, withcount=False)[source]#

Return full list of items in Top-K list of key. If withcount set to True, return full list of items with probabilistic count in Top-K list of key. For more information see TOPK.LIST.

query(key, *items)[source]#

Check whether one item or more is a Top-K item at key. For more information see TOPK.QUERY.

reserve(key, k, width, depth, decay)[source]#

Create a new Top-K Filter key with the specified number of top items k, width, depth, and decay. For more information see TOPK.RESERVE.
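
A fuller Top-K flow continuing the earlier example (illustrative items, assuming RedisBloom is loaded):

import redis
r = redis.Redis()
r.topk().reserve("mytopk", 3, 50, 4, 0.9)
r.topk().add("mytopk", "foo", "bar", "foo", "baz")
# List the current top items, with their probabilistic counts
r.topk().list("mytopk", withcount=True)
# Check whether specific items are currently in the top-k set
r.topk().query("mytopk", "foo", "nope")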


RedisGraph Commands#

These are the commands for interacting with the RedisGraph module. Below is a brief example, as well as documentation on the commands themselves.

Create a graph, adding two nodes

import redis
from redis.commands.graph.node import Node

john = Node(label="person", properties={"name": "John Doe", "age": 33})
jane = Node(label="person", properties={"name": "Jane Doe", "age": 34})

r = redis.Redis()
graph = r.graph()
graph.add_node(john)
graph.add_node(jane)
graph.commit()
class redis.commands.graph.node.Node(node_id=None, alias=None, label=None, properties=None)[source]#

A node within the graph.

class redis.commands.graph.edge.Edge(src_node, relation, dest_node, edge_id=None, properties=None)[source]#

An edge connecting two nodes.
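
A hedged sketch that connects two nodes with an edge and commits the graph (the graph name and relation are illustrative):

import redis
from redis.commands.graph.node import Node
from redis.commands.graph.edge import Edge

r = redis.Redis()
graph = r.graph("social")

john = Node(label="person", properties={"name": "John Doe", "age": 33})
jane = Node(label="person", properties={"name": "Jane Doe", "age": 34})
knows = Edge(john, "knows", jane)

graph.add_node(john)
graph.add_node(jane)
graph.add_edge(knows)
graph.commit()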

class redis.commands.graph.commands.GraphCommands[source]#

RedisGraph Commands

bulk(**kwargs)[source]#

Internal only. Not supported.

commit()[source]#

Create entire graph.

config(name, value=None, set=False)[source]#

Retrieve or update a RedisGraph configuration. For more information see GRAPH.CONFIG GET.

Args:

name (str)

The name of the configuration

value :

The value we want to set (can be used only when set is on)

set (bool)

Turn on to set a configuration. Default behavior is get.

delete()[source]#

Deletes the graph. For more information see GRAPH.DELETE.

execution_plan(query, params=None)[source]#

Get the execution plan for a given query; GRAPH.EXPLAIN returns an array of operations.

Args:

query: the query that will be executed
params: query parameters

explain(query, params=None)[source]#

Get the execution plan for a given query; GRAPH.EXPLAIN returns an ExecutionPlan object. For more information see GRAPH.EXPLAIN.

Args:

query: the query that will be executed
params: query parameters

flush()[source]#

Commit the graph and reset the edges and the nodes to zero length.

list_keys()[source]#

Lists all graph keys in the keyspace. For more information see GRAPH.LIST.

merge(pattern)[source]#

Merge pattern.

profile(query)[source]#

Execute a query and produce an execution plan augmented with metrics for each operation’s execution. Return a string representation of a query execution plan, with details on results produced by and time spent in each operation. For more information see GRAPH.PROFILE.

query(q, params=None, timeout=None, read_only=False, profile=False)[source]#

Executes a query against the graph (see the example after the parameter list below). For more information see GRAPH.QUERY.

Args:

q (str)

The query.

params (dict)

Query parameters.

timeout (int)

Maximum runtime for read queries in milliseconds.

read_only (bool)

Executes a readonly query if set to True.

profile (bool)

Return details on results produced by and time spent in each operation.
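
A hedged example of a parameterized, read-only query (the Cypher query, graph name, and parameter names are illustrative):

import redis
r = redis.Redis()
graph = r.graph("social")
result = graph.query(
    "MATCH (p:person) WHERE p.age > $min_age RETURN p.name",
    params={"min_age": 30},
    read_only=True,
)
# result_set holds the returned rows
for row in result.result_set:
    print(row)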

slowlog()[source]#

Get a list containing up to 10 of the slowest queries issued against the given graph ID. For more information see GRAPH.SLOWLOG.

Each item in the list has the following structure:
1. A Unix timestamp at which the log entry was processed.
2. The issued command.
3. The issued query.
4. The amount of time needed for its execution, in milliseconds.


RedisJSON Commands#

These are the commands for interacting with the RedisJSON module. Below is a brief example, as well as documentation on the commands themselves.

Create a JSON object

import redis
r = redis.Redis()
r.json().set("mykey", ".", {"hello": "world", "i am": ["a", "json", "object!"]})

Examples of how to combine search and json can be found here.

class redis.commands.json.commands.JSONCommands[source]#

json commands.

arrappend(name, path='.', *args)[source]#

Append the objects args to the array under path in key name.

For more information see JSON.ARRAPPEND.

Parameters
  • name (str) –

  • path (Optional[str], default: '.') –

  • args (List[Union[str, int, float, bool, None, Dict[str, Any], List[Any]]]) –

Return type

List[Optional[int]]
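
A short sketch of appending to a JSON array and checking its length (key and path are illustrative):

import redis
r = redis.Redis()
r.json().set("doc", ".", {"tags": ["a"]})
# Append two values to the array at .tags
r.json().arrappend("doc", ".tags", "b", "c")
r.json().arrlen("doc", ".tags")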

arrindex(name, path, scalar, start=None, stop=None)[source]#

Return the index of scalar in the JSON array under path at key name.

The search can be limited using the optional inclusive start and exclusive stop indices.

For more information see JSON.ARRINDEX.

Parameters
  • name (str) –

  • path (str) –

  • scalar (int) –

  • start (Optional[int], default: None) –

  • stop (Optional[int], default: None) –

Return type

List[Optional[int]]

arrinsert(name, path, index, *args)[source]#

Insert the objects args into the array at index index under path in key name.

For more information see JSON.ARRINSERT.

Parameters
  • name (str) –

  • path (str) –

  • index (int) –

  • args (List[Union[str, int, float, bool, None, Dict[str, Any], List[Any]]]) –

Return type

List[Optional[int]]

arrlen(name, path='.')[source]#

Return the length of the array JSON value under path at key name.

For more information see JSON.ARRLEN.

Parameters
  • name (str) –

  • path (Optional[str], default: '.') –

Return type

List[Optional[int]]

arrpop(name, path='.', index=-1)[source]#

Pop the element at index in the array JSON value under path at key name.

For more information see JSON.ARRPOP.

Parameters
  • name (str) –

  • path (Optional[str], default: '.') –

  • index (Optional[int], default: -1) –

Return type

List[Optional[str]]

arrtrim(name, path, start, stop)[source]#

Trim the array JSON value under path at key name to the inclusive range given by start and stop.

For more information see JSON.ARRTRIM.

Parameters
  • name (str) –

  • path (str) –

  • start (int) –

  • stop (int) –

Return type

List[Optional[int]]

clear(name, path='.')[source]#

Empty arrays and objects (to have zero slots/keys without deleting the array/object).

Return the count of cleared paths (ignoring non-array and non-objects paths).

For more information see JSON.CLEAR.

Parameters
  • name (str) –

  • path (Optional[str], default: '.') –

Return type

int

debug(subcommand, key=None, path='.')[source]#

Return the memory usage in bytes of a value under path from key name.

For more information see JSON.DEBUG.

Parameters
  • subcommand (str) –

  • key (Optional[str], default: None) –

  • path (Optional[str], default: '.') –

Return type

Union[int, List[str]]

delete(key, path='.')[source]#

Delete the JSON value stored at key key under path.

For more information see JSON.DEL.

Parameters
  • key (str) –

  • path (Optional[str], default: '.') –

Return type

int

forget(key, path='.')#

Delete the JSON value stored at key key under path.

For more information see JSON.DEL.

Parameters
  • key (str) –

  • path (Optional[str], default: '.') –

Return type

int

get(name, *args, no_escape=False)[source]#

Get the object stored as a JSON value at key name.

args is zero or more paths, and defaults to the root path. no_escape is a boolean flag that adds the no_escape option, in order to get non-ASCII characters.

For more information see JSON.GET.

Parameters
  • name (str) –

  • no_escape (Optional[bool], default: False) –

Return type

List[Union[str, int, float, bool, None, Dict[str, Any], List[Any]]]
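
A hedged example of fetching the whole document and then a single path (key and paths are illustrative):

import redis
r = redis.Redis()
r.json().set("mykey", ".", {"hello": "world", "answer": 42})
# Whole document
r.json().get("mykey")
# Only the value under $.answer
r.json().get("mykey", "$.answer")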

merge(name, path, obj, decode_keys=False)[source]#

Merges a given JSON value into matching paths. Consequently, JSON values at matching paths are updated, deleted, or expanded with new children.

decode_keys If set to True, the keys of obj will be decoded with utf-8.

For more information see JSON.MERGE.

Parameters
  • name (str) –

  • path (str) –

  • obj (Union[str, int, float, bool, None, Dict[str, Any], List[Any]]) –

  • decode_keys (Optional[bool], default: False) –

Return type

Optional[str]

mget(keys, path)[source]#

Get the objects stored as JSON values under path. keys is a list of one or more keys.

For more information see JSON.MGET.

Parameters
  • keys (List[str]) –

  • path (str) –

Return type

List[Union[str, int, float, bool, None, Dict[str, Any], List[Any]]]

mset(triplets)[source]#

Set the JSON value at key name under the path to obj for one or more keys.

triplets is a list of one or more triplets of key, path, value.

For the purpose of using this within a pipeline, this command is also aliased to JSON.MSET.

For more information see JSON.MSET.

Parameters

triplets (List[Tuple[str, str, Union[str, int, float, bool, None, Dict[str, Any], List[Any]]]]) –

Return type

Optional[str]

numincrby(name, path, number)[source]#

Increment the numeric (integer or floating point) JSON value under path at key name by the provided number.

For more information see JSON.NUMINCRBY.

Parameters
  • name (str) –

  • path (str) –

  • number (int) –

Return type

str

nummultby(name, path, number)[source]#

Multiply the numeric (integer or floating point) JSON value under path at key name with the provided number.

For more information see JSON.NUMMULTBY.

Parameters
  • name (str) –

  • path (str) –

  • number (int) –

Return type

str

objkeys(name, path='.')[source]#

Return the key names in the dictionary JSON value under path at key name.

For more information see JSON.OBJKEYS.

Parameters
  • name (str) –

  • path (Optional[str], default: '.') –

Return type

List[Optional[List[str]]]

objlen(name, path='.')[source]#

Return the length of the dictionary JSON value under path at key name.

For more information see JSON.OBJLEN.

Parameters
  • name (str) –

  • path (Optional[str], default: '.') –

Return type

int

resp(name, path='.')[source]#

Return the JSON value under path at key name.

For more information see JSON.RESP.

Parameters
  • name (str) –

  • path (Optional[str], default: '.') –

Return type

List

set(name, path, obj, nx=False, xx=False, decode_keys=False)[source]#

Set the JSON value at key name under the path to obj.

nx if set to True, set value only if it does not exist. xx if set to True, set value only if it exists. decode_keys If set to True, the keys of obj will be decoded with utf-8.

For the purpose of using this within a pipeline, this command is also aliased to JSON.SET.

For more information see JSON.SET.

Parameters
  • name (str) –

  • path (str) –

  • obj (Union[str, int, float, bool, None, Dict[str, Any], List[Any]]) –

  • nx (Optional[bool], default: False) –

  • xx (Optional[bool], default: False) –

  • decode_keys (Optional[bool], default: False) –

Return type

Optional[str]
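
A minimal sketch of the nx and xx flags (key name and values are illustrative):

import redis
r = redis.Redis()
# Create the document only if the key does not exist yet
r.json().set("profile", ".", {"name": "Jane"}, nx=True)
# Update the path only if it already exists
r.json().set("profile", ".name", "Janet", xx=True)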

set_file(name, path, file_name, nx=False, xx=False, decode_keys=False)[source]#

Set the JSON value at key name under the path to the content of the json file file_name.

nx if set to True, set value only if it does not exist. xx if set to True, set value only if it exists. decode_keys If set to True, the keys of obj will be decoded with utf-8.

Parameters
  • name (str) –

  • path (str) –

  • file_name (str) –

  • nx (Optional[bool], default: False) –

  • xx (Optional[bool], default: False) –

  • decode_keys (Optional[bool], default: False) –

Return type

Optional[str]

set_path(json_path, root_folder, nx=False, xx=False, decode_keys=False)[source]#

Iterate over root_folder and set each JSON file to a value under json_path with the file name as the key.

nx if set to True, set value only if it does not exist. xx if set to True, set value only if it exists. decode_keys If set to True, the keys of obj will be decoded with utf-8.

Parameters
  • json_path (str) –

  • root_folder (str) –

  • nx (Optional[bool], default: False) –

  • xx (Optional[bool], default: False) –

  • decode_keys (Optional[bool], default: False) –

Return type

List[Dict[str, bool]]

strappend(name, value, path='.')[source]#

Append to the string JSON value. If two options are specified after the key name, the path is determined to be the first. If a single option is passed, then the root_path (i.e. Path.root_path()) is used.

For more information see JSON.STRAPPEND.

Parameters
  • name (str) –

  • value (str) –

  • path (Optional[int], default: '.') –

Return type

Union[int, List[Optional[int]]]

strlen(name, path=None)[source]#

Return the length of the string JSON value under path at key name.

For more information see JSON.STRLEN.

Parameters
  • name (str) –

  • path (Optional[str], default: None) –

Return type

List[Optional[int]]

toggle(name, path='.')[source]#

Toggle the boolean value under path at key name, returning the new value.

For more information see JSON.TOGGLE.

Parameters
  • name (str) –

  • path (Optional[str], default: '.') –

Return type

Union[bool, List[Optional[int]]]

type(name, path='.')[source]#

Get the type of the JSON value under path from key name.

For more information see JSON.TYPE.

Parameters
  • name (str) –

  • path (Optional[str], default: '.') –

Return type

List[str]


RediSearch Commands#

These are the commands for interacting with the RediSearch module. Below is a brief example, as well as documentation on the commands themselves. In the example below, an index named my_index is being created. When an index name is not specified, an index named idx is created.

Create a search index, and display its information

import redis
from redis.commands.search.field import TextField

r = redis.Redis()
index_name = "my_index"
schema = (
    TextField("play", weight=5.0),
    TextField("ball"),
)
r.ft(index_name).create_index(schema)
print(r.ft(index_name).info())
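
A hedged end-to-end sketch that indexes hashes under a prefix and runs a search (the prefix, field names, and documents are illustrative, and the IndexDefinition import path may vary between redis-py versions):

import redis
from redis.commands.search.field import TextField
from redis.commands.search.indexDefinition import IndexDefinition

r = redis.Redis()
schema = (TextField("title", weight=5.0), TextField("body"))
definition = IndexDefinition(prefix=["doc:"])
r.ft("my_docs").create_index(schema, definition=definition)

# Documents are plain hashes stored under the indexed prefix
r.hset("doc:1", mapping={"title": "Henry IV", "body": "a play"})
r.hset("doc:2", mapping={"title": "Henry V", "body": "another play"})

res = r.ft("my_docs").search("Henry")
print(res.total, [doc.title for doc in res.docs])
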
class redis.commands.search.commands.SearchCommands[source]#

Search commands.

add_document(doc_id, nosave=False, score=1.0, payload=None, replace=False, partial=False, language=None, no_create=False, **fields)[source]#

Add a single document to the index.

### Parameters

  • doc_id: the id of the saved document.

  • nosave: if set to true, we just index the document, and don’t

    save a copy of it. This means that searches will just return ids.

  • score: the document ranking, between 0.0 and 1.0

  • payload: optional inner-index payload we can save for fast
    access in scoring functions

  • replace: if True, and the document already is in the index, we
    perform an update and reindex the document

  • partial: if True, the fields specified will be added to the
    existing document. This has the added benefit that any fields specified with no_index will not be reindexed again. Implies replace

  • language: Specify the language used for document tokenization.

  • no_create: if True, the document is only updated and reindexed

    if it already exists. If the document does not exist, an error will be returned. Implies replace

  • fields: kwargs dictionary of the document fields to be saved

    and/or indexed.

    NOTE: Geo points should be encoded as strings of “lon,lat”

Parameters
  • doc_id (str) –

  • nosave (bool, default: False) –

  • score (float, default: 1.0) –

  • payload (Optional[bool], default: None) –

  • replace (bool, default: False) –

  • partial (bool, default: False) –

  • language (Optional[str], default: None) –

  • no_create (str, default: False) –

  • fields (List[str]) –

add_document_hash(doc_id, score=1.0, language=None, replace=False)[source]#

Add a hash document to the index.

### Parameters

  • doc_id: the document’s id. This has to be an existing HASH key

    in Redis that will hold the fields the index needs.

  • score: the document ranking, between 0.0 and 1.0

  • replace: if True, and the document already is in the index, we

    perform an update and reindex the document

  • language: Specify the language used for document tokenization.

aggregate(query, query_params=None)[source]#

Issue an aggregation query.

### Parameters

query: This can be either an AggregateRequest, or a Cursor

An AggregateResult object is returned. You can access the rows from its rows property, which will always yield the rows of the result.

For more information see FT.AGGREGATE.

Parameters
  • query (Union[str, Query]) –

  • query_params (Optional[Dict[str, Union[str, int, float]]], default: None) –
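
A hedged aggregation sketch that groups results by a field (assumes an index named my_docs with a title field, as in the earlier example):

import redis
from redis.commands.search.aggregation import AggregateRequest
from redis.commands.search import reducers

r = redis.Redis()
# Count documents per title value
req = AggregateRequest("*").group_by("@title", reducers.count().alias("count"))
result = r.ft("my_docs").aggregate(req)
for row in result.rows:
    print(row)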

aliasadd(alias)[source]#

Alias a search index - will fail if alias already exists

### Parameters

  • alias: Name of the alias to create

For more information see FT.ALIASADD.

Parameters

alias (str) –

aliasdel(alias)[source]#

Removes an alias from a search index

### Parameters

  • alias: Name of the alias to delete

For more information see FT.ALIASDEL.

Parameters

alias (str) –

aliasupdate(alias)[source]#

Updates an alias - will fail if alias does not already exist

### Parameters

  • alias: Name of the alias to update

For more information see FT.ALIASUPDATE.

Parameters

alias (str) –

alter_schema_add(fields)[source]#

Alter the existing search index by adding new fields. The index must already exist.

### Parameters:

  • fields: a list of Field objects to add for the index

For more information see FT.ALTER.

Parameters

fields (List[str]) –

batch_indexer(chunk_size=100)[source]#

Create a new batch indexer from the client with a given chunk size

config_get(option)[source]#

Get runtime configuration option value.

### Parameters

  • option: the name of the configuration option.

For more information see FT.CONFIG GET.

Parameters

option (str) –

Return type

str

config_set(option, value)[source]#

Set runtime configuration option.

### Parameters

  • option: the name of the configuration option.

  • value: a value for the configuration option.

For more information see FT.CONFIG SET.

Parameters
  • option (str) –

  • value (str) –

Return type

bool

create_index(fields, no_term_offsets=False, no_field_flags=False, stopwords=None, definition=None, max_text_fields=False, temporary=None, no_highlight=False, no_term_frequencies=False, skip_initial_scan=False)[source]#

Create the search index. The index must not already exist.

### Parameters:

  • fields: a list of TextField or NumericField objects

  • no_term_offsets: If true, we will not save term offsets in
    the index

  • no_field_flags: If true, we will not save field flags that
    allow searching in specific fields

  • stopwords: If not None, we create the index with this custom
    stopword list. The list can be empty

  • max_text_fields: If true, we will encode indexes as if there
    were more than 32 text fields, which allows you to add additional fields (beyond 32)

  • temporary: Create a lightweight temporary index which will
    expire after the specified period of inactivity (in seconds). The internal idle timer is reset whenever the index is searched or added to

  • no_highlight: If true, disables highlighting support. Also
    implied by no_term_offsets

  • no_term_frequencies: If true, we avoid saving the term
    frequencies in the index

  • skip_initial_scan: If true, we do not scan and index

For more information see FT.CREATE.

delete_document(doc_id, conn=None, delete_actual_document=False)[source]#

Delete a document from the index. Returns 1 if the document was deleted, 0 if not.

### Parameters

  • delete_actual_document: if set to True, RediSearch also deletes

    the actual document if it is in the index

dict_add(name, *terms)[source]#

Adds terms to a dictionary.

### Parameters

  • name: Dictionary name.

  • terms: List of items for adding to the dictionary.

For more information see FT.DICTADD.

Parameters
  • name (str) –

  • terms (List[str]) –

dict_del(name, *terms)[source]#

Deletes terms from a dictionary.

### Parameters

  • name: Dictionary name.

  • terms: List of items for removing from the dictionary.

For more information see FT.DICTDEL.

Parameters
  • name (str) –

  • terms (List[str]) –

dict_dump(name)[source]#

Dumps all terms in the given dictionary.

### Parameters

  • name: Dictionary name.

For more information see FT.DICTDUMP.

Parameters

name (str) –

dropindex(delete_documents=False)[source]#

Drop the index if it exists. Replaced drop_index in RediSearch 2.0. Default behavior was changed to not delete the indexed documents.

### Parameters:

  • delete_documents: If True, all documents will be deleted.

For more information see FT.DROPINDEX.

Parameters

delete_documents (bool, default: False) –

explain(query, query_params=None)[source]#

Returns the execution plan for a complex query.

For more information see FT.EXPLAIN.

Parameters
  • query (Union[str, Query]) –

  • query_params (Optional[Dict[str, Union[str, int, float]]], default: None) –

get(*ids)[source]#

Returns the full contents of multiple documents.

### Parameters

  • ids: the ids of the saved documents.

info()[source]#

Get info and stats about the current index, including the number of documents, memory consumption, etc.

For more information see FT.INFO.

load_document(id)[source]#

Load a single document by id

profile(query, limited=False, query_params=None)[source]#

Performs a search or aggregate command and collects performance information.

### Parameters

query: This can be either an AggregateRequest, Query or string.
limited: If set to True, removes details of reader iterator.
query_params: Define one or more value parameters. Each parameter has a name and a value.

Parameters
  • query (Union[str, Query, AggregateRequest]) –

  • limited (bool, default: False) –

  • query_params (Optional[Dict[str, Union[str, int, float]]], default: None) –

search(query, query_params=None)[source]#

Search the index for a given query, and return a result of documents

### Parameters

  • query: the search query. Either a text for simple queries with

    default parameters, or a Query object for complex queries. See RediSearch’s documentation on query format

For more information see FT.SEARCH.

Parameters
  • query (Union[str, Query]) –

  • query_params (Optional[Dict[str, Union[str, int, float, bytes]]], default: None) –

spellcheck(query, distance=None, include=None, exclude=None)[source]#

Issue a spellcheck query

### Parameters

query: search query.
distance: the maximal Levenshtein distance for spelling suggestions (default: 1, max: 4).
include: specifies an inclusion custom dictionary.
exclude: specifies an exclusion custom dictionary.

For more information see FT.SPELLCHECK.

sugadd(key, *suggestions, **kwargs)[source]#

Add suggestion terms to the AutoCompleter engine. Each suggestion has a score and string. If kwargs[“increment”] is true and the terms are already in the server’s dictionary, we increment their scores.

For more information see FT.SUGADD.

sugdel(key, string)[source]#

Delete a string from the AutoCompleter index. Returns 1 if the string was found and deleted, 0 otherwise.

For more information see FT.SUGDEL.

Parameters
  • key (str) –

  • string (str) –

Return type

int

sugget(key, prefix, fuzzy=False, num=10, with_scores=False, with_payloads=False)[source]#

Get a list of suggestions from the AutoCompleter, for a given prefix.

Parameters:

prefix (str)

The prefix we are searching. Must be valid ascii or utf-8

fuzzy (bool)

If set to true, the prefix search is done in fuzzy mode. NOTE: Running fuzzy searches on short (<3 letters) prefixes can be very slow, and even scan the entire index.

with_scores (bool)

If set to true, we also return the (refactored) score of each suggestion. This is normally not needed, and is NOT the original score inserted into the index.

with_payloads (bool)

Return suggestion payloads

num (int)

The maximum number of results we return. Note that we might return less. The algorithm trims irrelevant suggestions.

Returns:

list:

A list of Suggestion objects. If with_scores was False, the score of all suggestions is 1.

For more information see FT.SUGGET.

Parameters
  • key (str) –

  • prefix (str) –

  • fuzzy (bool, default: False) –

  • num (int, default: 10) –

  • with_scores (bool, default: False) –

  • with_payloads (bool, default: False) –

Return type

List[SuggestionParser]
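
A hedged autocomplete sketch using sugadd and sugget (the suggestion key and terms are illustrative):

import redis
from redis.commands.search.suggestion import Suggestion

r = redis.Redis()
r.ft().sugadd("autocomplete", Suggestion("hello world", score=1.0))
r.ft().sugadd("autocomplete", Suggestion("help", score=2.0))
# Fetch up to 5 suggestions for the prefix "hel", with scores
r.ft().sugget("autocomplete", "hel", num=5, with_scores=True)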

suglen(key)[source]#

Return the number of entries in the AutoCompleter index.

For more information see FT.SUGLEN.

Parameters

key (str) –

Return type

int

syndump()[source]#

Dumps the contents of a synonym group.

The command is used to dump the synonyms data structure. Returns a list of synonym terms and their synonym group ids.

For more information see FT.SYNDUMP.

synupdate(groupid, skipinitial=False, *terms)[source]#

Updates a synonym group. The command is used to create or update a synonym group with additional terms. Only documents which were indexed after the update will be affected.

Parameters:

groupid :

Synonym group id.

skipinitial (bool)

If set to true, we do not scan and index.

terms :

The terms.

For more information see FT.SYNUPDATE.

Parameters
  • groupid (str) –

  • skipinitial (bool, default: False) –

  • terms (List[str]) –

tagvals(tagfield)[source]#

Return a list of all possible tag values

### Parameters

  • tagfield: Tag field name

For more information see FT.TAGVALS.

Parameters

tagfield (str) –


RedisTimeSeries Commands#

These are the commands for interacting with the RedisTimeSeries module. Below is a brief example, as well as documentation on the commands themselves.

Create a time series with 5 second retention

import redis
r = redis.Redis()
r.ts().create(2, retention_msecs=5000)
class redis.commands.timeseries.commands.TimeSeriesCommands[source]#

RedisTimeSeries Commands.

add(key, timestamp, value, retention_msecs=None, uncompressed=False, labels=None, chunk_size=None, duplicate_policy=None)[source]#

Append (or create and append) a new sample to a time series.

Args:

key:

time-series key

timestamp:

Timestamp of the sample. * can be used for automatic timestamp (using the system clock).

value:

Numeric data value of the sample

retention_msecs:

Maximum retention period, compared to maximal existing timestamp (in milliseconds). If None or 0 is passed then the series is not trimmed at all.

uncompressed:

Changes data storage from compressed (by default) to uncompressed

labels:

Set of label-value pairs that represent metadata labels of the key.

chunk_size:

Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [128 .. 1048576].

duplicate_policy:

Policy for handling multiple samples with identical timestamps. Can be one of:
- ‘block’: an error will occur for any out of order sample.
- ‘first’: ignore the new value.
- ‘last’: override with latest value.
- ‘min’: only override if the value is lower than the existing value.
- ‘max’: only override if the value is higher than the existing value.

For more information: https://redis.io/commands/ts.add/

Parameters
  • key (Union[bytes, str, memoryview]) –

  • timestamp (Union[int, str]) –

  • value (Union[int, float]) –

  • retention_msecs (Optional[int], default: None) –

  • uncompressed (Optional[bool], default: False) –

  • labels (Optional[Dict[str, str]], default: None) –

  • chunk_size (Optional[int], default: None) –

  • duplicate_policy (Optional[str], default: None) –
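
A hedged sketch of creating a series and appending samples with labels and a duplicate policy (key name and labels are illustrative):

import redis
r = redis.Redis()
r.ts().create(
    "temperature:room1",
    retention_msecs=86400000,
    labels={"sensor": "room1", "unit": "celsius"},
)
# "*" lets the server assign the current timestamp
r.ts().add("temperature:room1", "*", 21.5, duplicate_policy="last")
r.ts().get("temperature:room1")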

alter(key, retention_msecs=None, labels=None, chunk_size=None, duplicate_policy=None)[source]#

Update the retention, chunk size, duplicate policy, and labels of an existing time series.

Args:

key:

time-series key

retention_msecs:

Maximum retention period, compared to maximal existing timestamp (in milliseconds). If None or 0 is passed then the series is not trimmed at all.

labels:

Set of label-value pairs that represent metadata labels of the key.

chunk_size:

Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [128 .. 1048576].

duplicate_policy:

Policy for handling multiple samples with identical timestamps. Can be one of:
- ‘block’: an error will occur for any out of order sample.
- ‘first’: ignore the new value.
- ‘last’: override with latest value.
- ‘min’: only override if the value is lower than the existing value.
- ‘max’: only override if the value is higher than the existing value.

For more information: https://redis.io/commands/ts.alter/

Parameters
  • key (Union[bytes, str, memoryview]) –

  • retention_msecs (Optional[int], default: None) –

  • labels (Optional[Dict[str, str]], default: None) –

  • chunk_size (Optional[int], default: None) –

  • duplicate_policy (Optional[str], default: None) –

create(key, retention_msecs=None, uncompressed=False, labels=None, chunk_size=None, duplicate_policy=None)[source]#

Create a new time-series.

Args:

key:

time-series key

retention_msecs:

Maximum age for samples compared to highest reported timestamp (in milliseconds). If None or 0 is passed then the series is not trimmed at all.

uncompressed:

Changes data storage from compressed (by default) to uncompressed

labels:

Set of label-value pairs that represent metadata labels of the key.

chunk_size:

Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [128 .. 1048576].

duplicate_policy:

Policy for handling multiple samples with identical timestamps. Can be one of:
- ‘block’: an error will occur for any out of order sample.
- ‘first’: ignore the new value.
- ‘last’: override with latest value.
- ‘min’: only override if the value is lower than the existing value.
- ‘max’: only override if the value is higher than the existing value.

For more information: https://redis.io/commands/ts.create/

Parameters
  • key (Union[bytes, str, memoryview]) –

  • retention_msecs (Optional[int], default: None) –

  • uncompressed (Optional[bool], default: False) –

  • labels (Optional[Dict[str, str]], default: None) –

  • chunk_size (Optional[int], default: None) –

  • duplicate_policy (Optional[str], default: None) –

createrule(source_key, dest_key, aggregation_type, bucket_size_msec, align_timestamp=None)[source]#

Create a compaction rule from values added to source_key into dest_key.

Args:

source_key:

Key name for source time series

dest_key:

Key name for destination (compacted) time series

aggregation_type:

Aggregation type: One of the following: [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s, twa]

bucket_size_msec:

Duration of each bucket, in milliseconds

align_timestamp:

Assure that there is a bucket that starts at exactly align_timestamp and align all other buckets accordingly.

For more information: https://redis.io/commands/ts.createrule/

Parameters
  • source_key (Union[bytes, str, memoryview]) –

  • dest_key (Union[bytes, str, memoryview]) –

  • aggregation_type (str) –

  • bucket_size_msec (int) –

  • align_timestamp (Optional[int], default: None) –
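
A hedged compaction sketch that averages raw samples into one-hour buckets (key names are illustrative):

import redis
r = redis.Redis()
r.ts().create("temperature:room1")
r.ts().create("temperature:room1:avg_1h")
# Compact raw samples into hourly (3600000 ms) averages
r.ts().createrule("temperature:room1", "temperature:room1:avg_1h", "avg", 3600000)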

decrby(key, value, timestamp=None, retention_msecs=None, uncompressed=False, labels=None, chunk_size=None)[source]#

Decrement (or create a time series and decrement) the latest sample of a series. This command can be used as a counter or gauge that automatically gets history as a time series.

Args:

key:

time-series key

value:

Numeric data value of the sample

timestamp:

Timestamp of the sample. * can be used for automatic timestamp (using the system clock).

retention_msecs:

Maximum age for samples compared to last event time (in milliseconds). If None or 0 is passed then the series is not trimmed at all.

uncompressed:

Changes data storage from compressed (by default) to uncompressed

labels:

Set of label-value pairs that represent metadata labels of the key.

chunk_size:

Memory size, in bytes, allocated for each data chunk.

For more information: https://redis.io/commands/ts.decrby/

Parameters
  • key (Union[bytes, str, memoryview]) –

  • value (Union[int, float]) –

  • timestamp (Union[int, str, None], default: None) –

  • retention_msecs (Optional[int], default: None) –

  • uncompressed (Optional[bool], default: False) –

  • labels (Optional[Dict[str, str]], default: None) –

  • chunk_size (Optional[int], default: None) –

delete(key, from_time, to_time)[source]#

Delete all samples between two timestamps for a given time series.

Args:

key:

time-series key.

from_time:

Start timestamp for the range deletion.

to_time:

End timestamp for the range deletion.

For more information: https://redis.io/commands/ts.del/

Parameters
  • key (Union[bytes, str, memoryview]) –

  • from_time (int) –

  • to_time (int) –

deleterule(source_key, dest_key)[source]#

Delete a compaction rule from source_key to dest_key.

For more information: https://redis.io/commands/ts.deleterule/

Parameters
  • source_key (Union[bytes, str, memoryview]) –

  • dest_key (Union[bytes, str, memoryview]) –

get(key, latest=False)[source]#

Get the last sample of key. latest is used when a time series is a compaction; it reports the compacted value of the latest (possibly partial) bucket.

For more information: https://redis.io/commands/ts.get/

Parameters
  • key (Union[bytes, str, memoryview]) –

  • latest (Optional[bool], default: False) –

incrby(key, value, timestamp=None, retention_msecs=None, uncompressed=False, labels=None, chunk_size=None)[source]#

Increment (or create a time series and increment) the latest sample of a series. This command can be used as a counter or gauge that automatically gets history as a time series.

Args:

key:

time-series key

value:

Numeric data value of the sample

timestamp:

Timestamp of the sample. * can be used for automatic timestamp (using the system clock).

retention_msecs:

Maximum age for samples compared to last event time (in milliseconds). If None or 0 is passed then the series is not trimmed at all.

uncompressed:

Changes data storage from compressed (by default) to uncompressed

labels:

Set of label-value pairs that represent metadata labels of the key.

chunk_size:

Memory size, in bytes, allocated for each data chunk.

For more information: https://redis.io/commands/ts.incrby/

Parameters
  • key (Union[bytes, str, memoryview]) –

  • value (Union[int, float]) –

  • timestamp (Union[int, str, None], default: None) –

  • retention_msecs (Optional[int], default: None) –

  • uncompressed (Optional[bool], default: False) –

  • labels (Optional[Dict[str, str]], default: None) –

  • chunk_size (Optional[int], default: None) –

info(key)[source]#

Get information about key.

For more information: https://redis.io/commands/ts.info/

Parameters

key (Union[bytes, str, memoryview]) –

madd(ktv_tuples)[source]#

Append (or create and append) new values to one or more time series. Expects a list of tuples as (key, timestamp, value). The return value is an array with the timestamps of the insertions.

For more information: https://redis.io/commands/ts.madd/

Parameters

ktv_tuples (List[Tuple[Union[bytes, str, memoryview], Union[int, str], Union[int, float]]]) –

mget(filters, with_labels=False, select_labels=None, latest=False)[source]#

Get the last samples matching a specific filter.

Args:

filters:

Filter to match the time-series labels.

with_labels:

Include in the reply all label-value pairs representing metadata labels of the time series.

select_labels:

Include in the reply only a subset of the key-value pair labels of a series.

latest:

Used when a time series is a compaction, reports the compacted value of the latest possibly partial bucket

For more information: https://redis.io/commands/ts.mget/

Parameters
  • filters (List[str]) –

  • with_labels (Optional[bool], default: False) –

  • select_labels (Optional[List[str]], default: None) –

  • latest (Optional[bool], default: False) –

mrange(from_time, to_time, filters, count=None, aggregation_type=None, bucket_size_msec=0, with_labels=False, filter_by_ts=None, filter_by_min_value=None, filter_by_max_value=None, groupby=None, reduce=None, select_labels=None, align=None, latest=False, bucket_timestamp=None, empty=False)[source]#

Query a range across multiple time-series by filters in forward direction.

Args:

from_time:

Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).

to_time:

End timestamp for range query, + can be used to express the maximum possible timestamp.

filters:

filter to match the time-series labels.

count:

Limits the number of returned samples.

aggregation_type:

Optional aggregation type. Can be one of [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s, twa]

bucket_size_msec:

Time bucket for aggregation in milliseconds.

with_labels:

Include in the reply all label-value pairs representing metadata labels of the time series.

filter_by_ts:

List of timestamps to filter the result by specific timestamps.

filter_by_min_value:

Filter result by minimum value (must mention also filter_by_max_value).

filter_by_max_value:

Filter result by maximum value (must mention also filter_by_min_value).

groupby:

Group the results by fields (must also specify reduce).

reduce:

Apply reducer functions to each group. Can be one of [avg, sum, min, max, range, count, std.p, std.s, var.p, var.s].

select_labels:

Include in the reply only a subset of the key-value pair labels of a series.

align:

Timestamp for alignment control for aggregation.

latest:

Used when a time series is a compaction, reports the compacted value of the latest possibly partial bucket

bucket_timestamp:

Controls how bucket timestamps are reported. Can be one of [-, low, +, high, ~, mid].

empty:

Reports aggregations for empty buckets.

For more information: https://redis.io/commands/ts.mrange/

Parameters
  • from_time (Union[int, str]) –

  • to_time (Union[int, str]) –

  • filters (List[str]) –

  • count (Optional[int], default: None) –

  • aggregation_type (Optional[str], default: None) –

  • bucket_size_msec (Optional[int], default: 0) –

  • with_labels (Optional[bool], default: False) –

  • filter_by_ts (Optional[List[int]], default: None) –

  • filter_by_min_value (Optional[int], default: None) –

  • filter_by_max_value (Optional[int], default: None) –

  • groupby (Optional[str], default: None) –

  • reduce (Optional[str], default: None) –

  • select_labels (Optional[List[str]], default: None) –

  • align (Union[int, str, None], default: None) –

  • latest (Optional[bool], default: False) –

  • bucket_timestamp (Optional[str], default: None) –

  • empty (Optional[bool], default: False) –
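
A hedged multi-series query sketch that filters by label and aggregates each matching series (key names and labels are illustrative):

import redis
r = redis.Redis()
r.ts().create("temperature:room1", labels={"unit": "celsius"})
r.ts().create("temperature:room2", labels={"unit": "celsius"})
r.ts().add("temperature:room1", "*", 21.5)
r.ts().add("temperature:room2", "*", 19.0)
# "-" and "+" cover the full timestamp range; average into 1-minute buckets
r.ts().mrange("-", "+", filters=["unit=celsius"],
              aggregation_type="avg", bucket_size_msec=60000)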

mrevrange(from_time, to_time, filters, count=None, aggregation_type=None, bucket_size_msec=0, with_labels=False, filter_by_ts=None, filter_by_min_value=None, filter_by_max_value=None, groupby=None, reduce=None, select_labels=None, align=None, latest=False, bucket_timestamp=None, empty=False)[source]#

Query a range across multiple time-series by filters in reverse direction.

Args:

from_time:

Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).

to_time:

End timestamp for range query, + can be used to express the maximum possible timestamp.

filters:

Filter to match the time-series labels.

count:

Limits the number of returned samples.

aggregation_type:

Optional aggregation type. Can be one of [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s, twa]

bucket_size_msec:

Time bucket for aggregation in milliseconds.

with_labels:

Include in the reply all label-value pairs representing metadata labels of the time series.

filter_by_ts:

List of timestamps to filter the result by specific timestamps.

filter_by_min_value:

Filter result by minimum value (must mention also filter_by_max_value).

filter_by_max_value:

Filter result by maximum value (must mention also filter_by_min_value).

groupby:

Group the results by fields (must also specify reduce).

reduce:

Apply reducer functions to each group. Can be one of [avg, sum, min, max, range, count, std.p, std.s, var.p, var.s].

select_labels:

Include in the reply only a subset of the key-value pair labels of a series.

align:

Timestamp for alignment control for aggregation.

latest:

Used when a time series is a compaction, reports the compacted value of the latest possibly partial bucket

bucket_timestamp:

Controls how bucket timestamps are reported. Can be one of [-, low, +, high, ~, mid].

empty:

Reports aggregations for empty buckets.

For more information: https://redis.io/commands/ts.mrevrange/

Parameters
  • from_time (Union[int, str]) –

  • to_time (Union[int, str]) –

  • filters (List[str]) –

  • count (Optional[int], default: None) –

  • aggregation_type (Optional[str], default: None) –

  • bucket_size_msec (Optional[int], default: 0) –

  • with_labels (Optional[bool], default: False) –

  • filter_by_ts (Optional[List[int]], default: None) –

  • filter_by_min_value (Optional[int], default: None) –

  • filter_by_max_value (Optional[int], default: None) –

  • groupby (Optional[str], default: None) –

  • reduce (Optional[str], default: None) –

  • select_labels (Optional[List[str]], default: None) –

  • align (Union[int, str, None], default: None) –

  • latest (Optional[bool], default: False) –

  • bucket_timestamp (Optional[str], default: None) –

  • empty (Optional[bool], default: False) –

queryindex(filters)[source]#

Get all time series keys matching the filter list.

For more information: https://redis.io/commands/ts.queryindex/

Parameters

filters (List[str]) –

range(key, from_time, to_time, count=None, aggregation_type=None, bucket_size_msec=0, filter_by_ts=None, filter_by_min_value=None, filter_by_max_value=None, align=None, latest=False, bucket_timestamp=None, empty=False)[source]#

Query a range in forward direction for a specific time series.

Args:

key:

Key name for timeseries.

from_time:

Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).

to_time:

End timestamp for range query, + can be used to express the maximum possible timestamp.

count:

Limits the number of returned samples.

aggregation_type:

Optional aggregation type. Can be one of [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s, twa]

bucket_size_msec:

Time bucket for aggregation in milliseconds.

filter_by_ts:

List of timestamps to filter the result by specific timestamps.

filter_by_min_value:

Filter result by minimum value (must mention also filter_by_max_value).

filter_by_max_value:

Filter result by maximum value (must mention also filter_by_min_value).

align:

Timestamp for alignment control for aggregation.

latest:

Used when a time series is a compaction, reports the compacted value of the latest possibly partial bucket

bucket_timestamp:

Controls how bucket timestamps are reported. Can be one of [-, low, +, high, ~, mid].

empty:

Reports aggregations for empty buckets.

For more information: https://redis.io/commands/ts.range/

Parameters
  • key (Union[bytes, str, memoryview]) –

  • from_time (Union[int, str]) –

  • to_time (Union[int, str]) –

  • count (Optional[int], default: None) –

  • aggregation_type (Optional[str], default: None) –

  • bucket_size_msec (Optional[int], default: 0) –

  • filter_by_ts (Optional[List[int]], default: None) –

  • filter_by_min_value (Optional[int], default: None) –

  • filter_by_max_value (Optional[int], default: None) –

  • align (Union[int, str, None], default: None) –

  • latest (Optional[bool], default: False) –

  • bucket_timestamp (Optional[str], default: None) –

  • empty (Optional[bool], default: False) –

revrange(key, from_time, to_time, count=None, aggregation_type=None, bucket_size_msec=0, filter_by_ts=None, filter_by_min_value=None, filter_by_max_value=None, align=None, latest=False, bucket_timestamp=None, empty=False)[source]#

Query a range in reverse direction for a specific time-series.

Note: This command is only available since RedisTimeSeries >= v1.4

Args:

key:

Key name for timeseries.

from_time:

Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).

to_time:

End timestamp for range query, + can be used to express the maximum possible timestamp.

count:

Limits the number of returned samples.

aggregation_type:

Optional aggregation type. Can be one of [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s, twa]

bucket_size_msec:

Time bucket for aggregation in milliseconds.

filter_by_ts:

List of timestamps to filter the result by specific timestamps.

filter_by_min_value:

Filter result by minimum value (must mention also filter_by_max_value).

filter_by_max_value:

Filter result by maximum value (must mention also filter_by_min_value).

align:

Timestamp for alignment control for aggregation.

latest:

Used when a time series is a compaction, reports the compacted value of the latest possibly partial bucket

bucket_timestamp:

Controls how bucket timestamps are reported. Can be one of [-, low, +, high, ~, mid].

empty:

Reports aggregations for empty buckets.

For more information: https://redis.io/commands/ts.revrange/

Parameters
  • key (Union[bytes, str, memoryview]) –

  • from_time (Union[int, str]) –

  • to_time (Union[int, str]) –

  • count (Optional[int], default: None) –

  • aggregation_type (Optional[str], default: None) –

  • bucket_size_msec (Optional[int], default: 0) –

  • filter_by_ts (Optional[List[int]], default: None) –

  • filter_by_min_value (Optional[int], default: None) –

  • filter_by_max_value (Optional[int], default: None) –

  • align (Union[int, str, None], default: None) –

  • latest (Optional[bool], default: False) –

  • bucket_timestamp (Optional[str], default: None) –

  • empty (Optional[bool], default: False) –