BangDBDatabase represents the database within BangDB. The database contains the rest of the entities within BangDB, for example: table, stream, ML, etc. We need to create the database object to be able to do anything within the db.
C++
To create a table, we use BangDBDatabase and call the getTable() API.
TableEnv is a type using which the user can describe the various details for the index that should be created. For more information, please see the TableEnv type. There are, however, two helper APIs provided for simplicity; in a few cases we can simply use these instead.
For advanced settings, we should use the addIndex() API.
The first one ( addIndex_str() ) creates an index for a string/text column/field. The idx_size describes the max size of the index key. To create an index for a num or fixed-size datatype, we can use the addIndex_num() API. It returns a negative value for error, usually -1.
To drop an index, we can simply call
int dropIndex(const char *idxName);
It returns a negative value for error, usually -1. To check if an index is present
bool hasIndex(const char *idxName);
To get the TableEnv reference (for an already created or opened table)
TableEnv *getTableEnv();
The user should free the returned TableEnv by calling delete. It returns NULL for error.
To dump data on the disk
int dumpData();
It returns a negative value for error, usually -1. To get the name of the table
const char *getName();
The user should free the returned data by calling delete[]. It returns NULL for error.
To get the full table path on the file system
const char *getTableDir();
The user should free the returned data by calling delete[]. It returns NULL for error.
To get the index type of the table
IndexType getIndexType();
To get the table stats, the details of the table
const char *getStats(bool verbose = true);
The user should free the returned data by calling delete[]. To upload files into the table (for files - supported only for TableType = LARGE_TABLE), the key is typically the file id (string only) and the file path is the actual location of the file. To download a file, it uses the key to retrieve the file and stores the file with the name fname in the location fpath on the local system. These return a negative value (like -1) for error.
To get list of all large data keys (todo: make it recursive, should have flag set in the json itself).
The user should free the returned data by calling delete[]. It returns NULL for error.
To get the number of slices for a particular file.
int countSliceLargeData(const char *key);
Large files or large objects are kept in BangDB in slices. These slices are combined to return the data/file/object. This API returns the number of slices for any given file/object. It returns a negative value for error, usually -1.
To get count of files or large objects
long countLargeData();
It returns a negative value for error, usually -1. To delete a file/object from a large table
int delLargeData(const char *key);
It returns a negative value for error, usually -1. To put a key and value into a normal table.
FDT is a helper type which allows us to deal with different data using the same interface. It mainly has two important parts: the data (void*) and the length of the data. Users should use the constructor provided by FDT to set data, or else ensure _fixed_sz_data is set properly: 1 if the data is of fixed size, else 0. When we set data using a constructor, FDT does this automatically. See FDT for more info.
It returns a negative value for error, usually -1. To scan data between sk and ek, the two primary keys. This is a range scan using the primary key. Note that this API should be used for NORMAL_TABLE; since a NORMAL_TABLE can't have secondary indexes, we can scan using the primary keys only.
It returns a ResultSet, which allows the user to iterate through the returned keys and values. See the ResultSet type for more information. ScanFilter sets some of the elements for scanning; see ScanFilter for more information. It returns NULL for error.
If a transaction is enabled and we wish to put the operation within it, then we should pass the transaction object reference; otherwise it should be NULL. See Transaction for more details. This API returns -1 for error.
To delete the data defined by a key
long del(FDT *key, Transaction *txn = NULL);
It returns -1 for error. To insert text data
long putText(const char *text, int textlen, FDT *k = NULL, InsertOptions flag = INSERT_UNIQUE);
It returns a negative value for error, usually -1. To scan text data
This is solely for reverse-index-based scans. The wlist is an array of all the tokens to search for; if intersect is true then it works as AND, else as OR. It returns a ResultSet for success or NULL for error.
To upload a document
long putDoc(const char *doc, FDT *pk = NULL, const char *rev_idx_fields_json = NULL, InsertOptions flag = INSERT_UNIQUE);
It returns a negative value for error, usually -1.
To scan for a document for given primary keys coupled with filter query. This is a powerful API which allows users to define the query and scan the table. The query could be absent, or simple or complex in nature.
This query is a json doc and can be written directly, or for simplicity users may leverage the DataQuery type to build it. For more detail, it is highly recommended to go through DataQuery and also the recommended way to call and use scan.
It returns a negative value, usually -1, for error. To get the expected count between two keys. Note this is just indicative and should not be taken as an exact count.
long expCount(FDT *skey, FDT *ekey);
It returns a negative value, usually -1, for error. To get the count of keys in the table
long count();
It returns a negative value, usually -1, for error. To enable auto-commit for single operations. Usually it's always ON, and if WAL is selected (which is the default) then it is always ON.
void setAutoCommit(bool flag);
To get the type of the table
TableType getTableType();
Returns true if this table is the same as the given table.
This closes the table and returns 0 for success and -1 for error.
Java
To add an index for a table
public int addIndex(String idxName, TableEnv tenv)
This is the generic API for adding an index for a table. It returns 0 for success and -1 for error. TableEnv is a type using which the user can describe the various details for the index that should be created. For more information, please see the TableEnv type. There are, however, two helper APIs provided for simplicity; in a few cases we can simply use these instead.
For advanced settings, we should use the addIndex() API.
The first one ( addIndex_str() ) creates an index for a string/text column/field. The idx_size describes the max size of the index key. To create an index for a num or fixed-size datatype, we can use the addIndex_num() API.
To drop index we can simply call
public int dropIndex(String idxName)
This will drop the index. It returns -1 for error and 0 for success.
To check if an index is present
public boolean hasIndex(String idxName)
This returns whether the given index is defined for the table or not.
To dump data on the disk
public int dumpData()
This dumps the data for the table, forcing all data for the table to be written to the filesystem. It returns -1 for error and 0 for success.
To get the name of the table
public String getName()
It returns the name of the table, or null for error. To get the full table path on the file system
public String getTableDir()
This returns the full table path on the file system, or null for error.
To get the index type of the table. Index type is an enum.
public IndexType getIndexType()
To get the table stats, the details of the table
public String getStats(boolean verbose)
This will return a json string of the table stats. Verbose dictates the brevity of the response. For errors, it returns null.
To upload files into the table (for files - supported only for TableType = LARGE_TABLE). The key is typically the file id (string only) and file_path is the actual location of the file on the server. This returns a negative value for error.
To download, it uses a key to retrieve the file and stores the file with the name fname in the location fpath on the local system. This returns a negative value (like -1) for error.
To get all large data keys (todo: make it recursive, should have flag set in the json itself)
public byte[] getLargeData(String key)
This is only supported for tables of the Large Table type. We can use this API to get large data from the table, identified by the key. It returns the data as a byte array, or null for error.
Large files or large objects are kept in BangDB in slices. These slices are combined to return the data/file/object. This API returns the number of slices for any given file/object. It returns a negative value for error, usually -1.
To get the count of large data in the db
public long countLargeData()
This returns a negative value for error, usually -1.
To delete file/object from a Large table
public int delLargeData(String key)
This returns a negative value for error, usually -1.
This is used for the Normal Table type. It scans the data between sk and ek, the two primary keys; either or both of these could be null. It returns a ResultSet, which allows the user to iterate through the returned keys and values. See the ResultSet type for more information. ScanFilter sets some of the elements for scanning; see ScanFilter for more information.
To scan for documents for given primary keys coupled with a filter query. This is a powerful API which allows users to define the query and scan the table. The query could be absent, simple, or complex in nature. This query is a json doc and can be written directly, or for simplicity users may leverage the DataQuery type to build it. For more detail, it is highly recommended to go through DataQuery and also the recommended way to call and use scan.
public ResultSet scanDoc(ResultSet prev_rs, String pk_skey, String pk_ekey, String idx_filter_json, ScanFilter sf)
This is used for wide tables only. It returns a ResultSet for success or null for error.
This could be used for any table except for large tables. Given a key, it will return the value in the val attribute. It returns 0 for success and -1 for error. If a transaction is enabled and we wish to put the operation within it, then we should pass the transaction object reference; otherwise it should be null. See Transaction for more details.
This API counts the number of documents or rows matching the supplied filter query. It can use the primary index, secondary indexes, and the reversed index all together or as needed. It returns the count if successful, else -1 for error.
This API returns the expected count between two keys. Please note this is not the exact count but a rough measurement. If there are a large number of keys in the table and we wish to know a rough estimate of count, then this function can be very efficient and fast with very little overhead. Returns count if successful else -1 for error.
To get the number of rows in a table
public long count()
It returns a negative value for error, usually -1.
To enable auto commit for single operations. Usually it's always ON and if WAL is selected (which is default) then it is ON always.
public void setAutoCommit(boolean flag)
Returns true if this table is the same as the given table.