(BOX) copied data from GDrive only shows up in Box mount after remounting

What is the problem you are having with rclone?

After copying files from Google Drive to Box via rclone, copied files show up in rclone mount only after remounting the chunker drive after copying is finished. Files are visible in the Box web interface from the start of the copying.

Run the command 'rclone version' and share the full output of the command.

rclone v1.63.1

  • os/version: debian 11.7 (64 bit)
  • os/kernel: 5.10.0-22-amd64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20.6
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive (source) and Box (destination, behind crypt and chunker remotes).

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/usr/bin/rclone mount chunker: /path/to/mountonlocaldisk --config /path/to/rclone.conf -vv --use-mmap --vfs-cache-mode full --cache-dir=/path/to/cachefolderonlocaldisk --vfs-read-ahead 512M --buffer-size=2G --vfs-cache-max-size=2000G --tpslimit 10 --vfs-fast-fingerprint
rclone copy -P gdrive:/Media/"TEST FOLDER"/ chunker:/Data/Media/"TEST FOLDER"/ --transfers 5 --bwlimit 50M

#### The rclone config contents with secrets removed.

[gdrive]
type = drive
client_id = REDACTED
client_secret = REDACTED
scope = drive
token = REDACTED
team_drive =

[box]
type = box
token = REDACTED

[boxcrypt]
type = crypt
remote = box:/Data/
password = REDACTED
password2 = REDACTED

[chunker]
type = chunker
remote = boxcrypt:
chunk_size = 4.800Gi

#### A log from the command with the `-vv` flag  


It takes up to 5 minutes for the mount to reflect changes in the remote. You do not have to re-mount; give it a few minutes and everything will appear.
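If you do not want to wait out the directory cache window, one option (a sketch, assuming you add `--rc` to your existing mount command; paths are placeholders) is to refresh the VFS directory cache on demand via rclone's remote control API:

```shell
# 1. Start the mount with the remote control API enabled (add --rc to
#    your existing flags; it listens on localhost:5572 by default):
rclone mount chunker: /path/to/mountonlocaldisk --rc --vfs-cache-mode full

# 2. After a copy finishes, force the directory cache to re-read the
#    remote instead of waiting for --dir-cache-time to expire:
rclone rc vfs/refresh recursive=true
```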

BTW. Any reason you use these two flags in your mount command?

--vfs-read-ahead 512M --buffer-size=2G

Normally it is better to use the default values and not change them, unless you have some special requirements.

default --buffer-size is 16M

Unless you have a very specific workflow for the mount, your values do not make sense and can be counterproductive and/or dangerous.

--buffer-size is allocated per open file. With your settings, if some program opens 20 files, rclone can need 40GB of RAM. If you don't have that much, the result can even be a system crash :)

--vfs-read-ahead SizeSuffix  Extra read ahead over --buffer-size when using cache-mode full

With your values, every file you open - even to read just one byte - will result in rclone reading up to 2.5GB from the remote...
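The 2.5GB figure is just the two flags added together; a quick sanity check:

```shell
# Per-file read-ahead with the original mount flags:
#   --buffer-size=2G      (held in RAM, per open file)
#   --vfs-read-ahead 512M (buffered on disk, on top of --buffer-size)
buffer_mib=2048
readahead_mib=512
echo "$(( buffer_mib + readahead_mib )) MiB ahead per opened file"
# prints "2560 MiB ahead per opened file", i.e. 2.5 GiB
```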

I use these flags for streaming: they preload files from the remote into the disk cache, from where they are then just uploaded to the player. Sometimes I start streaming from the remote while a file download starts on my server at the same time. Since it's a gigabit server up/down, the download has priority and uses all available bandwidth, making reads from the remote slower. With some buffer already downloaded, the server uploads that to the player and slowly keeps refilling the buffer from the remote until the simultaneous download is finished.

You pre-load 2GB into memory, not the disk cache :) That is what --buffer-size sets.

So the --vfs-read-ahead flag is irrelevant for my use case then?

As per docs:

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
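Translated to the mount command from the top of the thread, the docs' advice would look roughly like this (paths are placeholders and 256M is an illustrative value, not a tuned one):

```shell
# Small in-memory buffer, larger on-disk read-ahead, per the VFS docs:
rclone mount chunker: /path/to/mountonlocaldisk \
  --config /path/to/rclone.conf \
  --vfs-cache-mode full \
  --cache-dir=/path/to/cachefolderonlocaldisk \
  --vfs-cache-max-size 2000G \
  --buffer-size 16M \
  --vfs-read-ahead 256M \
  --tpslimit 10
```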

You can also read more about these two flags in this thread:


OK, I'll play around with the mount commands and follow the docs.

Unrelated to the post topic, which you solved (thanks for that - I removed --dir-cache-time of 1000h and left it at the default 5m): is there a use case for the box flag --box-upload-cutoff?

I am currently uploading and chunking 30GB files. The multipart upload is running with the default part size of 64Mi; the chunker split size is set to 4.8GB.

I received an error: multipart upload failed to upload part: client connection refused. This was with a bandwidth limit of 50 MB/s. I am now trying 30 MB/s, so far so good. Maybe my server needed more time to verify the MD5 hash and took longer than rclone was willing to wait?

For streaming I would use:

--buffer-size=0 --vfs-read-ahead 256M

You do not want to set --vfs-read-ahead massively large. It would work for streaming one video, but when, for example, you start watching film A, then change to B, and finally decide to watch C, rclone has 3x --vfs-read-ahead to finish downloading. At some stage, depending on your usage pattern, you can choke your whole connection.

What you want is a jitter-free watching experience, so there is always some data downloaded ahead to prevent glitches when temporary network slowdowns happen - but not so much ahead that you download things you do not need. If you set it to 2GB, merely clicking on a file makes rclone download that 2GB.

But of course you can experiment and find the values best for your specific usage.


This flag tells rclone when to use multipart upload (for files larger than the cutoff size). The default is 50MB. If you set it too low, uploads of small files can suffer, as the number of simultaneous uploads is limited by the number of transfers - and that limit is shared between multipart parts and individual files.
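As a sketch, the cutoff can be raised per command with the backend flag (200Mi is an arbitrary example value, not a recommendation):

```shell
# Files below 200 MiB are sent as single uploads; only larger ones go multipart:
rclone copy gdrive:/Media/"TEST FOLDER"/ chunker:/Data/Media/"TEST FOLDER"/ \
  --transfers 5 --box-upload-cutoff 200Mi
```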

Box does throttle - but based on the rate of API calls rather than bandwidth, even though the two can be related. You can exceed the API call limit on a slow connection as well; it is more a matter of latency than pure speed. It is much better and more precise to use --tpslimit to prevent throttling.

Per box API limits doc:

General API calls

1000 API requests per minute, per user
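1000 requests per minute works out to roughly 16 per second, which is why 15 leaves a little headroom under the cap:

```shell
# Box cap: 1000 API requests per minute, per user
per_minute=1000
echo "$(( per_minute / 60 )) requests/second (integer floor)"
# prints "16 requests/second (integer floor)", so --tpslimit 15 stays safely under it
```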

so a good start would be:

--tpslimit 15 --tpslimit-burst 0

If you still see errors, lower the value; if all is OK, try increasing it.

Unfortunately, the precise value can only be found empirically. For example, for Dropbox the consensus is that 12 is optimal. For Box, I do not think we know yet :)
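A starting point for the next transfer, combining the suggested limits with your original copy command (values to be tuned empirically):

```shell
rclone copy -P gdrive:/Media/"TEST FOLDER"/ chunker:/Data/Media/"TEST FOLDER"/ \
  --transfers 5 --tpslimit 15 --tpslimit-burst 0
```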


Great, thanks for the help. I'm monitoring the transfer with the 30MB/s bandwidth limit, no issues so far. I will test the tpslimit and burst flags with the next transfer, or if the current one messes up.

New and exciting times 🙂 Thanks again. Have a lovely day and weekend.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.