BRS supports large files and objects, so you can upload (put) files of any type and size, within the practical limits of the machine. This makes it a suitable place for ML-related files, which may run to several MBs or GBs.

For efficiency, BangDB stores a file by breaking it into several chunks of 1 MB each. When a user requests the file, the server sends all the necessary chunks, which are combined to reconstruct the original file. If you use this API directly, you will need to follow the workflow described below; if you use the CLI or dashboard, they handle the workflow for you implicitly.

Let's see how it works.

For uploading files

Method : POST

URI : /brs/<bucket_name>/putfile

Body : encoded data (base64 encoding)
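As a quick illustration (in Python), base64 encoding is lossless text encoding of raw bytes, so the server can recover the exact chunk bytes from the request body:

```python
import base64

# A chunk of raw file bytes (here, a short stand-in for a 1 MB slice)
chunk = b"raw bytes read from the file"

# The request body is the base64-encoded text of the chunk
body = base64.b64encode(chunk).decode("ascii")

# Decoding the body restores the original bytes exactly
assert base64.b64decode(body) == chunk
```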

Workflow to upload a file named "aloi", ~17 MB in size, into a bucket "mybucket".

Step 1

Input a file, read its size, and form the JSON request below:

   {
      "access_key": "brs_access_key",
      "secret_key": "brs_secret_key",
      "fsize": 17254154,
      "bucket_name": "mybucket",
      "key": "aloi"
   }

Call the POST API /brs/mybucket/putfile with the above JSON as the body.

It will send the following response:


   {
      "next_key": "aloi:0:17:17254154:1641803297543410",
      "nslice": 17,
      "chunk_size": 1048576,
      "file_size": 17254154,
      "next_id": 0
   }

This says that the database will store the file in 17 slices ("nslice"), each 1 MB in size ("chunk_size"). It also returns the "next_key" that should be used when sending the first 1 MB chunk of the file.
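The slice count in the response follows directly from the file size and the 1 MiB chunk size; a quick sketch (Python) of the arithmetic:

```python
import math

CHUNK_SIZE = 1048576          # 1 MiB, as reported in "chunk_size"
fsize = 17254154              # size of "aloi", from the step 1 request

# Number of slices is the file size divided by the chunk size, rounded up
nslice = math.ceil(fsize / CHUNK_SIZE)
print(nslice)                 # 17, matching "nslice" in the response
```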

Step 2

Repeat "nslice" times:

  • Read a chunk of 1 MB (or the remaining bytes) of the file from offset beg (initially beg = 0).
  • Encode the chunk with base64.
  • Call the API /brs/mybucket/putfile?key=<next_key>&nslice=<nslice> with the encoded chunk as the body.
  • Read the response:

    {
       "next_key": "aloi:1",
       "next_id": 1
    }

    As you can see, it sends next_key again; use this key in the next iteration.

  • If the response contains an error, quit.
  • If next_key is not present in the response, the upload is done.
  • Otherwise, set beg = beg + 1MB and continue the loop.
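The loop above can be sketched in Python. This is a minimal sketch under assumptions: `post` is a caller-supplied stand-in for whatever HTTP client call you use against /brs/mybucket/putfile (it is not part of BangDB's API), and error handling is omitted.

```python
import base64

CHUNK_SIZE = 1048576  # 1 MiB, per the "chunk_size" field in the response

def iter_chunks(data: bytes):
    """Yield successive 1 MiB chunks of the file contents."""
    for beg in range(0, len(data), CHUNK_SIZE):
        yield data[beg:beg + CHUNK_SIZE]

def upload(data: bytes, next_key: str, nslice: int, post):
    """Send each base64-encoded chunk in order.

    `post(key, nslice, body)` wraps the HTTP POST to
    /brs/mybucket/putfile?key=<key>&nslice=<nslice> and returns the
    parsed JSON response as a dict (hypothetical helper, not BangDB API).
    """
    for chunk in iter_chunks(data):
        body = base64.b64encode(chunk).decode("ascii")
        resp = post(next_key, nslice, body)
        if "next_key" not in resp:   # no next_key => last chunk accepted
            break
        next_key = resp["next_key"]  # use the returned key for the next chunk
```

On the server side, reassembly mirrors this loop: decoding each body and concatenating the chunks in order reproduces the original file.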