BangDBDatabase represents the database within BangDB. The database contains the rest of the entities in BangDB, for example: table, stream, ML, etc. We need to create the database object to be able to do anything within the db.

C++

To create or open a table, we use BangDBDatabase and call its getTable() API.

To close a table

int closeTable(ClosedType tblCloseType = DEFAULT_AT_CLIENT);

It returns a negative value for error, usually -1.

To add index for a table

int addIndex(const char *idxName, TableEnv *tenv);

TableEnv is a type using which the user can describe the various details of the index to be created. For more information please see the TableEnv type. For advanced settings, we should use the addIndex() API above. However, two helper APIs are provided for simplicity; in many cases we can simply use these:

int addIndex_str(const char *idxName, int idx_size, bool allowDuplicates);
int addIndex_num(const char *idxName, bool allowDuplicates);

The first one, addIndex_str(), creates an index for a string / text column or field. The idx_size describes the max size of the index key. To create an index for a numeric or fixed-size datatype, we can use the addIndex_num() API. They return a negative value for error, usually -1.
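
As a quick sketch (assuming tbl is a BangDBTable* obtained via BangDBDatabase::getTable(); the field names below are hypothetical), using the helper index APIs could look like:

```cpp
// sketch only: tbl and the field names are assumptions
if (tbl->addIndex_str("name", 64, true) < 0) { // text field, max index key 64 bytes, duplicates allowed
    // handle error (-1)
}
if (tbl->addIndex_num("age", true) < 0) {      // numeric / fixed-size field
    // handle error (-1)
}
if (!tbl->hasIndex("name")) {
    // index was not created
}
```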

To drop an index we can simply call

int dropIndex(const char *idxName);

It returns a negative value for error, usually -1.

To check if an index is present

bool hasIndex(const char *idxName);

To get the TableEnv reference (for an already created or opened table)

TableEnv *getTableEnv();

The user should free the returned TableEnv by calling delete. It returns NULL for error.

To dump data on the disk

int dumpData();

It returns a negative value for error, usually -1.

To get the name of the table

const char *getName();

The user should free the returned data by calling delete[]. It returns NULL for error.

To get the full table path on the file system

const char *getTableDir();

The user should free the returned data by calling delete[]. It returns NULL for error.

To get the index type of the table

IndexType getIndexType();

To get the table stats, the details of the table

const char *getStats(bool verbose = true);

The user should free the returned data by calling delete[].

To upload files or large objects into the table (supported only for TableType = LARGE_TABLE)

long putFile(const char *key, const char *file_path, InsertOptions iop);
long putLargeData(const char *key, void *val, long vallen, InsertOptions iop);

The key is typically a file id (string only) and file_path is the actual location of the file on the server. These return a negative value for error.

To download file from table to local system

long getFile(const char *key, const char *fname, const char *fpath);
long getLargeData(const char *key, void **buf, long *vallen);

It uses the key to retrieve the file and stores it with the name fname in the location fpath on the local system. These return a negative value (like -1) for error.
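
As a sketch (tbl, the file id, and the paths below are all hypothetical), an upload/download round trip could look like:

```cpp
// sketch only: assumes tbl is a BangDBTable* for a LARGE_TABLE
InsertOptions iop = INSERT_UNIQUE;
if (tbl->putFile("report.pdf", "/data/uploads/report.pdf", iop) < 0) {
    // upload failed
}
// later, fetch it back to the local system as /tmp/report_copy.pdf
if (tbl->getFile("report.pdf", "report_copy.pdf", "/tmp") < 0) {
    // download failed
}
```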

To get the list of all large data keys (todo: make it recursive; should have a flag set in the json itself).

char *listLargeDataKeys(const char *key, int list_size_mb = 0);

The user should free the returned data by calling delete[]. It returns NULL for error.

To get the number of slices for a particular file.

int countSliceLargeData(const char *key);

Large files or large objects are kept in BangDB in slices. These slices are combined to return the data/file/object. This API returns the number of slices for any given file/object. It returns a negative value for error, usually -1.

To get count of files or large objects

long countLargeData();

It returns a negative value for error, usually -1.

To delete a file/object from a large table

int delLargeData(const char *key);

It returns a negative value for error, usually -1.

To put the key and val for a normal table.

long put(FDT *key, FDT *val, InsertOptions flag = INSERT_UNIQUE, Transaction *txn = NULL);

FDT is a helper type which allows us to deal with different data using the same interface. It mainly has two important parts: the data (void*) and the length of the data. Users should use the constructors provided by FDT to set data, else they should ensure _fixed_sz_data is set properly: if the data is of fixed size then set it to 1, else 0. When we set data using a constructor, FDT does this automatically. See FDT for more info.

It returns a negative value for error, usually -1.

To scan data between sk and ek, the two primary keys. This is a range scan using the primary key. Note that this API should be used for a NORMAL_TABLE; since a NORMAL_TABLE can't have secondary indexes, we can scan using the primary keys only.

ResultSet * scan(
   ResultSet * prev_rs,
   FDT * pk_skey = NULL, FDT * pk_ekey = NULL,
   ScanFilter * sf = NULL, Transaction * txn = NULL
);

It returns ResultSet, which allows the user to iterate through the returned key and values. See ResultSet type for more information. ScanFilter sets some of the elements for scanning, see ScanFilter for more information. It returns NULL for error.
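
A typical pattern is to pass the previous ResultSet back into scan() to page through the data. This is a sketch only: tbl is assumed, and the ResultSet method names used here should be checked against the ResultSet type:

```cpp
// sketch only: full-range scan over a NORMAL_TABLE with pagination
ResultSet *rs = NULL;
do {
    rs = tbl->scan(rs, NULL, NULL, NULL, NULL); // no key bounds, default filter
    if (!rs) break;                             // error
    while (rs->hasNext()) {
        // consume rs->getNextKey() / rs->getNextVal() here
        rs->moveNext();
    }
} while (rs->moreDataToCome());                 // assumed ResultSet method
```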

To get data for any table except a large table

int get(FDT *key, FDT **val, Transaction *txn = NULL);

If a transaction is active and we wish to put the operation within it, then we should pass the transaction object reference, else it should be NULL. See Transaction for more details. This API returns -1 for error.
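
Putting these together, a put/get round trip could look like the following sketch (tbl is assumed, and the FDT constructor usage shown should be checked against the FDT type):

```cpp
// sketch only: put then get on a normal table, no transaction
FDT key((void *)"user:1", 6);  // constructor signature assumed; see FDT
FDT val((void *)"alice", 5);
if (tbl->put(&key, &val) < 0) {
    // insert failed
}
FDT *out = NULL;
if (tbl->get(&key, &out) == 0) {
    // use out->data / out->length, then free out
}
```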

To delete data defined by key

long del(FDT *key, Transaction *txn = NULL);

It returns -1 for error.

To insert text data

long putText(const char *text, int textlen, FDT *k = NULL, InsertOptions flag = INSERT_UNIQUE);

It returns a negative value for error, usually -1.

To scan text data

ResultSet *scanText(const char *wlist[], int nfilters, bool intersect = false);

This is solely for reverse-index based scans. The wlist is an array of all the tokens to search for; if intersect is true then the tokens are combined with AND, else with OR. It returns a ResultSet for success or NULL for error.

To upload a document

long putDoc(const char *doc, FDT *pk = NULL, const char *rev_idx_fields_json = NULL, InsertOptions flag = INSERT_UNIQUE);

It returns a negative value for error, usually -1.

To scan for documents for given primary keys coupled with a filter query. This is a powerful API which allows users to define the query and scan the table. The query could be absent, simple, or complex in nature.

This query is a json doc and can be written directly, or for simplicity users may leverage the DataQuery type to build it. For more detail, it is highly recommended to go through DataQuery and also through the recommended way to call and use scan.

ResultSet * scanDoc(
   ResultSet * prev_rs,
   FDT * pk_skey = NULL,
   FDT * pk_ekey = NULL,
   const char * idx_filter_json = NULL,
   ScanFilter * sf = NULL
);
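
As a sketch (tbl, the field name, and the JSON shape below are all assumptions; the exact filter schema should be taken from DataQuery):

```cpp
// sketch only: scan all documents matching a hypothetical filter
const char *filter =
    "{\"query\":[{\"key\":\"city\",\"cmp_op\":0,\"val\":\"paris\"}]}";
ResultSet *rs = tbl->scanDoc(NULL, NULL, NULL, filter, NULL);
if (!rs) {
    // error
}
```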

It returns a ResultSet for success or NULL for error.

To count the number of keys for a given query (idx_filter_json)

long count(FDT *pk_skey, FDT *pk_ekey, const char *idx_filter_json = NULL, ScanFilter *sf = NULL);

It returns a negative value, usually -1, for error.

To get the expected count between two keys. Note this is just indicative and should not be taken as an exact count.

long expCount(FDT *skey, FDT *ekey);

It returns a negative value, usually -1, for error.

To get the count of the number of keys in the table

long count();

It returns a negative value, usually -1, for error.

To enable auto commit for single operations. Usually it is ON, and if WAL is selected (which is the default) then it is always ON.

void setAutoCommit(bool flag);

To get the type of the table

TableType getTableType();

Returns true if this table is the same as the given table

bool isSameAs(BangDBTable *tbl);

Java

To create BangDBTable object

public BangDBTable()

To close a table

public int closeTable(CloseType closeType, boolean force)

This closes the table and returns 0 for success and -1 for error.

To add index for a table

public int addIndex(String idxName, TableEnv tenv)

This is the generic API for adding an index to a table. It returns 0 for success and -1 for error. TableEnv is a type using which the user can describe the various details of the index to be created. For more information please see the TableEnv type. For advanced settings, we should use the addIndex() API above. However, two helper APIs are provided for simplicity; in many cases we can simply use these:

public int addIndex_str(String idxName, int idx_size, boolean allowDuplicates)
public int addIndex_num(String idxName, boolean allowDuplicates)

The first one, addIndex_str(), creates an index for a string / text column or field. The idx_size describes the max size of the index key. To create an index for a numeric or fixed-size datatype, we can use the addIndex_num() API.

To drop index we can simply call

public int dropIndex(String idxName)

This will drop the index. It returns 0 for success and -1 for error.

To check if an index is present

public boolean hasIndex(String idxName)

This returns true if the given index is defined for the table, else false.

To dump data on the disk

public int dumpData()

This dumps the data for the table which forces all data for the table to be written on the filesystem. It returns -1 for error and 0 for success.

To get the name of the table

public String getName()

It returns the table name, or null for error.

To get the full table path on the file system

public String getTableDir()

This returns the full table path on the file system, or null for error.

To get the index type of the table. Index type is an enum.

public IndexType getIndexType()

To get the table stats, the details of the table

public String getStats(boolean verbose)

This will return json string for table stats. Verbose will dictate the brevity of the response. For errors, it will return null.

To upload files in the table

public long putFile(String key, String file_path, InsertOptions iop)

The key is typically a file id (string only) and file_path is the actual location of the file on the server. This returns a negative value for error.

To download file from table to local system

public long getFile(String key, String fname, String fpath)

It uses the key to retrieve the file and stores it with the name fname in the location fpath on the local system. This returns a negative value (like -1) for error.

To upload large data into the table (supported only for TableType = LARGE_TABLE)

public long putLargeData(String key, byte[] val, InsertOptions iop)

The key is typically a file or object id (string only) and val holds the data bytes. This returns a negative value for error.

To get large data from the table

public byte[] getLargeData(String key)

This is only supported for tables of the LARGE_TABLE type. We can use this API to get large data from the table identified by the key. It returns the data as a byte array, or null for error.

To get the list of large data keys

public String listLargeDataKeys(String skey, int list_size_mb)

It returns null for error.

To get the number of slices for a particular file

public int countSliceLargeData(String key)

Large files or large objects are kept in BangDB in slices. These slices are combined to return the data/file/object. This API returns the number of slices for any given file/object. It returns a negative value for error, usually -1.

To get the count of large data in the db

public long countLargeData()

This returns a negative value for error, usually -1.

To delete file/object from a Large table

public int delLargeData(String key)

This returns a negative value for error, usually -1.

To put key and values into the table

public long put(String key, byte[] val, InsertOptions flag, Transaction txn) 
public long put(long key, byte[] val, InsertOptions flag, Transaction txn) 
public long put(String key, String val, InsertOptions flag, Transaction txn) 
public long put(long key, String val, InsertOptions flag, Transaction txn)

To scan data between skey and ekey, the two primary keys.

public ResultSet scan(
   ResultSet prev_rs,
   String pk_skey,
   String pk_ekey,
   ScanFilter sf, 
   Transaction txn
)

This is used for the NORMAL_TABLE type. It scans the data between sk and ek, the two primary keys; either or both of these could be null. It returns a ResultSet, which allows the user to iterate through the returned keys and values. See the ResultSet type for more information. ScanFilter sets some of the elements for scanning; see ScanFilter for more information.

To scan for documents for given primary keys coupled with a filter query. This is a powerful API which allows users to define the query and scan the table. The query could be absent, simple, or complex in nature. This query is a json doc and can be written directly, or for simplicity users may leverage the DataQuery type to build it. For more detail, it is highly recommended to go through DataQuery and also through the recommended way to call and use scan.

public ResultSet scanDoc(
   ResultSet prev_rs, 
   String pk_skey, 
   String pk_ekey, 
   String idx_filter_json, 
   ScanFilter sf
)

This is used for wide tables only. It returns a ResultSet for success or null for error.

To get data for a particular key

public byte[] get(String key, Transaction txn) 
public byte[] get(long key, Transaction txn)

This can be used for any table except large tables. Given a key, it returns the value as a byte array, or null for error. If a transaction is active and we wish to put the operation within it, then we should pass the transaction object reference, else it should be null. See Transaction for more details.

To delete data for a particular key

public long del(String key, Transaction txn)
public long del(long key, Transaction txn)

This can be used for all table types. It deletes the data defined by the key and returns 0 for success, else -1 for error.

To get the number of events matching a condition

public long count(String pk_skey, String pk_ekey, String idx_filter_json, ScanFilter sf)
public long count(long pk_skey, long pk_ekey, String idx_filter_json, ScanFilter sf)

This API counts the number of documents or rows matching the supplied filter query. It can use the primary index, secondary indexes, and reverse index all together, or as needed. It returns the count if successful, else -1 for error.

To get the expected count of documents or rows between two keys

public long expCount(String skey, String ekey)
public long expCount(long skey, long ekey)

This API returns the expected count between two keys. Please note this is not the exact count but a rough measurement. If there are a large number of keys in the table and we wish to know a rough estimate of count, then this function can be very efficient and fast with very little overhead. Returns count if successful else -1 for error.

To get the number of rows in a table

public long count()

It returns a negative value for error, usually -1.

To enable auto commit for single operations. Usually it is ON, and if WAL is selected (which is the default) then it is always ON.

public void setAutoCommit(boolean flag)

Returns true if this table is the same as the given table

public boolean isSameAs(BangDBTable tbl)

To get the table type

public TableType getTableType()