Mount SharePoint : progress / stats?

What is the problem you are having with rclone?

Lack of progress / statistics reports for a WebDAV / SharePoint remote,
which has no backend commands.

Run the command 'rclone version' and share the full output of the command.

rclone --version

rclone v1.57.0-DEV

  • os/version: rocky 8.10 (64 bit)
  • os/kernel: 4.18.0-553.33.1.el8_10.66.x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.16.12
  • go/linking: dynamic
  • go/tags: none

This is the latest version in the Rocky RHEL8.10 distro.

Which cloud storage system are you using? (eg Google Drive)

SharePoint

The command you were trying to run (eg rclone copy /tmp remote:tmp)

 /usr/bin/rclone mount --daemon --log-file=/root/rclone.log --log-format=pid --vfs-cache-mode writes --vfs-write-wait 10s ${REMOTE}: ${URL}

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.


[$REMOTE]
type = webdav
url = https://${ORG}sharepoint.com/${URL_STEM}
vendor = sharepoint
user = $USER
pass = $PASS

A log from the command that you were trying to run with the -vv flag

No output written to log

I am trying to transfer 264 2GB files to SharePoint. Together they make up
a set of Linux Volume Manager (LVM) Physical Volume (PV) files,
which comprise an LVM Volume Group (VG) that is 100% occupied by
an LVM Logical Volume (LV) that is to contain a backup of a ~500GB
XFS filesystem, such that one Linux host can mount these remote PVs in RW mode
and other Linux hosts can mount them in RO mode.

I started off by creating about 80 of these PVs: initializing each to 2GiB (1<<31) of
zero bytes, attaching them to loop devices with 'losetup(8)', running 'pvcreate(8)' on
each of them, adding them to a new Volume Group created with 'vgcreate(8)' (NOT using
sharing / lvmlockd), creating a new resizable Logical Volume using all of them,
creating an XFS filesystem, and dumping the first ~150GB of the directories to be
backed up.
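The steps above can be sketched roughly as follows (a hypothetical illustration only: file paths, counts, and the VG/LV names are placeholders I have invented, and all of these commands require root):

```shell
# Create ~80 backing files of 2GiB of zero bytes and attach each to a
# free loop device (paths and count are placeholders):
for i in $(seq -w 0 79); do
    dd if=/dev/zero of=/backing/pv_$i.img bs=1M count=2048
    losetup --find --show /backing/pv_$i.img
done

pvcreate /dev/loop[0-9]*                      # initialize each loop device as an LVM PV
vgcreate backup_vg /dev/loop[0-9]*            # one Volume Group, no lvmlockd/sharing
lvcreate -l 100%FREE -n backup_lv backup_vg   # one LV spanning all the PVs
mkfs.xfs /dev/backup_vg/backup_lv             # XFS filesystem on top
```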

That was about 6 days ago.

Since then, NO files have appeared in the SharePoint directory when viewed in a
browser, nothing has been written to the log file, and I have no idea how long
the transfer will take.

I am trying to move this vast directory to cloud storage. The host has only
about 160GB of free storage left; we don't need regular access to many of the files
it contains, which date back over 10 years, but we don't want to delete them
altogether.

I can see rclone is writing to the SharePoint server:

$ netstat -nautp | grep rclone
tcp 0 0 ${HOST_IP}:${LOCAL_PORT} 13.107.136.10:443 ESTABLISHED 137412/rclone

$ tcpdump -n -v -tt -l -i any src ${HOST_IP} and dst 13.107.136.10
...
1736936538.836001 IP (tos 0x0, ttl 64, id 22606, offset 0, flags [DF], proto TCP (6), length 2920)
${HOST_IP}.${TMP_PORT} > 13.107.136.10.https: Flags [.], cksum 0x7f6a (incorrect -> 0x678c), seq 1667695:1670575, ack 14855, win 858, length 2880
1736936538.837107 IP (tos 0x0, ttl 64, id 22608, offset 0, flags [DF], proto TCP (6), length 2920)
${HOST_IP}.${TMP_PORT} > 13.107.136.10.https: Flags [.], cksum 0x7f6a (incorrect -> 0xfe9f), seq 1670575:1673455, ack 14855, win 858, length 2880
1736936538.838058 IP (tos 0x0, ttl 64, id 22610, offset 0, flags [DF], proto TCP (6), length 1480)
${HOST_IP}.${TMP_PORT} > 13.107.136.10.https: Flags [.], cksum 0x79ca (incorrect -> 0x79f9), seq 1673455:1674895, ack 14855, win 858, length 1440

On Sunday, I saw that 'netstat -nautp' reported NO open sockets for the rclone
daemon, so I stopped and restarted it, and transfers have since resumed.

So far, not one single 2GB PV file has appeared in SharePoint since I started
last Thursday (today is Wednesday).

My plan is: once the transfer has completed, and I can read back and checksum
the remote files, and the checksums of all backed-up files match, I can remove the
~/.cache/rclone/vfs/${REMOTE}/* files, remove the original backed-up files to
free space, and go on to create the next 150GB of PV files and dump more of the
filesystem. Since we are using '--vfs-cache-mode writes', only the new files
will be cached, so we won't have to hold a complete replica of all the PV files
in the local cache, for which we have insufficient storage.

It would be great to get some indication of the progress of the copy of the
local cache files to the remote SharePoint directory, but I can't find any
rclone command to do this, beyond the backend commands, which do
not exist for SharePoint.

Please, could a future version of rclone have such a command, which, for
every file partially transferred, would print out the number of bytes transferred
and the number still to transfer? This is a major inadequacy of rclone that
impacts its overall usefulness, IMHO. If I had known there were no status
or progress report commands or log messages, I would have tried to use
a different tool.

Could anyone hazard a guess as to how long a transfer of 150GB should take,
given our 10MB/s internet connection?
(This is obviously a lot longer than the ((150 x 10^9) / (10 x 10^6)) / 60
== 250 minutes theoretical minimum; it has taken ~6 days now.)
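For what it's worth, the 250-minute figure works out as follows (a bash sketch, taking "10MB/s" to mean 10*10^6 bytes per second):

```shell
# Back-of-envelope theoretical minimum for 150GB at 10MB/s
# (bash arithmetic; ** is exponentiation):
bytes=$((150 * 1000 ** 3))     # 150 GB in bytes
rate=$((10 * 1000 ** 2))       # 10 MB/s in bytes per second
echo $(( bytes / rate / 60 ))  # minimum transfer time in minutes: prints 250
```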

Any suggestions on how to get some idea of progress from rclone or SharePoint
would be most gratefully received.

Please consider adding a progress monitoring facility / commands / options to
rclone mount !

This is an rclone version from 2021... so whatever your results, they are only interesting to people studying the history of IT.

Maybe we should create a special forum category (rclone archaeology) for such enthusiasts. For most, it is irrelevant what was or wasn't broken many years ago.

Please keep such unhelpful comments to yourself.

This is the only help you can get at this stage. Install the latest rclone version and try again. Then, if the problem persists, we will look into it.

This is not an rclone problem. Linux distros often contain tons of outdated software.

My advice is to forget your distro-provided version and install one directly from the Rclone downloads page.

1.57 is the latest rclone version provided by my Linux distro:

[root@$HOST ~]# repoquery rclone
Last metadata expiration check: 0:12:59 ago on Wed 15 Jan 2025 10:59:05 GMT.
rclone-0:1.57.0-1.el8.src
rclone-0:1.57.0-1.el8.x86_64

OK, I guess rclone is not the tool for me, since there
is evidently a major lack of understanding of the requirements of use cases
on the part of the developers, and no online help forum.

Or does the latest version of rclone have mount transfer progress monitoring?
Please let me know if so. Otherwise, I will have to terminate the rclone
transfer and use a different method to transfer the PV files; rclone is
just not up to the task.

You can use the rc interface to pull various stats, including progress. You will find plenty of examples on this forum.
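As a concrete sketch (the remote name and mountpoint are placeholders; the flags and rc commands follow the current rclone docs, and vfs/stats may need a newer rclone than 1.57, so check against your installed version):

```shell
# Start the mount with the remote-control server enabled.
# --rc-no-auth skips authentication, so only use it on a trusted host:
rclone mount REMOTE: /mnt/point --daemon --vfs-cache-mode writes \
    --rc --rc-addr=localhost:5572 --rc-no-auth

# Global transfer stats (bytes transferred, speed, ETA per transfer):
rclone rc core/stats

# VFS cache stats, including files queued for upload:
rclone rc vfs/stats
```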

Rclone is generally a single binary, and distro packages are maintained by the distro, so they are frequently out of date. Rclone has no control over that and doesn't manage or maintain them.

Recommended install method is here: Install

I get that you didn't appreciate @kapitainsky's attempt at some fun humor above. I am pretty confident he was joking, as we get a lot of posts from folks with outdated versions, and that's a particularly old one, as RedHat keeps extremely old binaries. There's a quite active forum here, and the developers are extremely responsive.

You can poll via many methods, remote control commands:
Remote Control / API

Prometheus stats, for which there are a few examples: Mount Monitoring with Prometheus and Grafana - Howto Guides - rclone forum
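Alongside that guide, recent rclone versions can also expose a Prometheus endpoint directly on the rc port (a sketch; verify the flag names against your version with 'rclone help flags'):

```shell
# Serve Prometheus metrics at /metrics on the rc port:
rclone mount REMOTE: /mnt/point --daemon --vfs-cache-mode writes \
    --rc --rc-addr=localhost:5572 --rc-enable-metrics --rc-no-auth

# Scrape it manually to see what is exported:
curl -s http://localhost:5572/metrics | grep '^rclone_'
```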

Searching the forums is a great way to get a few examples of what you are looking for.

SharePoint has some very bad throttling, and based on the mount command, you are probably getting a lot of it. That's why, as part of the help template, we ask for a debug log of the issue you are having, so we can look at the problem with the exact command being run; having a log file makes answering questions much easier.


Indeed. Any offence caused was not intentional, but I was dead serious about refusing help to people using such an outdated version :) In 99% of cases it is pure laziness, and the remaining 1% usually have the option of using the latest version anyway, for example on old macOS.

In the case of Linux, I have not yet seen a situation where it was impossible to run the latest rclone (maybe only with some ancient FreeBSD). Only kernels older than v2.6.32 would be a problem. As @Animosity022 mentioned, rclone is a single binary which can be run from any location and has no dependencies: simply download it and use it.

Rclone is a relatively fast-paced project, which makes v1.57 25 releases behind; every new release brings new features, bug fixes, and enhancements. That makes attempts to troubleshoot such old versions not only difficult but simply pointless.

Linux is the worst offender here, as apart from rolling releases (SUSE Tumbleweed, Arch), most distros treat rclone as not important enough to pay any attention to. There is also a lack of understanding of the Linux distribution model, where claims are often made about how many thousands of packages are supported, without mentioning that only a small fraction of them (e.g. the web browser) will be kept up to date during a release's lifecycle.

It is a rather long post :) but hopefully it clarifies my position on this topic.

When you assume intent, it becomes offensive. Many people do not want to use a non-distro package on their distro, and some don't understand the situation, or misconstrue it to mean that rclone maintains those packages.


For anybody on Linux fed up with outdated software in their distro, I suggest looking at Homebrew on Linux.

It is probably better known to macOS users, but it does the same magic on Linux (on pretty much any flavour, including Red Hat and WSL).

Please note that cloud storage APIs are pretty dynamic; Microsoft in particular loves changing things around. A quick look at the changelog will show how many changes, enhancements, and fixes have gone in since that version originally came out.

You can also use RC commands to turn DEBUG logging on and off on the mount and check what is going on; otherwise the log will be pretty thin.

Also note that SharePoint does not allow streaming uploads, so the backup file needs to be created and stored somewhere on your local system before the upload can take place. I highly recommend you specify a path for it to be stored so you can manage the space accordingly. This is a limitation from Microsoft, not rclone, as they require the exact size of the file before an upload can take place.

Also, SharePoint has a file size limit of 250GB, so you might need to either make smaller files or use the chunker backend to have rclone split them for you.
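For the log-level toggling mentioned above, a sketch using the options/set rc call (a pattern from the rclone rc docs; this assumes the mount was started with --rc on the default address):

```shell
# Switch the running mount's log level to DEBUG...
rclone rc options/set --json '{"main": {"LogLevel": "DEBUG"}}'

# ...and back to NOTICE once you have captured what you need:
rclone rc options/set --json '{"main": {"LogLevel": "NOTICE"}}'
```

And for the 250GB limit, a chunker remote wrapping the SharePoint remote might look like this (illustrative names and chunk size):

```
[chunked]
type = chunker
remote = REMOTE:path/to/backups
chunk_size = 100G
```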

We are happy to help out, but please note you also need to be open to suggestions and fixes, and not only expect us to troubleshoot versions we stopped using years ago that contain multiple issues which have since been solved.