Dropbox: IO error: MoveDir failed: from_lookup/not_found/ when trying to move (copy works)

What is the problem you are having with rclone?

When I try to move a folder including its files, I get the following error.

Run the command 'rclone version' and share the full output of the command.

rclone v1.60.0

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.4.0-131-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.19.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Dropbox

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/usr/bin/rclone mount \
  --user-agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36' \
  --config=/mnt/local/rclone/config/rclone.conf \
  --allow-other \
  --allow-non-empty \
  --buffer-size 256M \
  --dir-cache-time 87600h \
  --poll-interval 1m \
  --timeout 15s \
  --vfs-cache-mode full \
  --vfs-cache-max-age 60s \
  --vfs-write-back 30s \
  --cache-dir /mnt/local/rclone/cache \
  --attr-timeout 1s \
  --umask=002 \
  --use-mmap \
  --log-file /var/log/rclone/media.log \
  --tpslimit 12 \
  --tpslimit-burst 1 \
  --transfers 32 \
  --dropbox-batch-mode sync \
  -v \
  dbx_fbx_storage_combined: /mnt/cloud

The rclone config contents with secrets removed.

[dbx_fbx_storage_combined]
type = combine
upstreams = Folder1=dbx_fbx_storage_a_crypt: Folder2=dbx_fbx_storage_b_crypt: Folder3=dbx_fbx_storage_c_crypt: Folder4=dbx_fbx_storage_d_crypt: Folder5=dbx_fbx_storage_e_crypt: Folder6=dbx_fbx_storage_f_crypt: Folder7=dbx_fbx_storage_g_crypt: Folder8=dbx_fbx_storage_h_crypt: Folder9=dbx_fbx_storage_i_crypt:

[dbx_fbx_storage_a_base]
type = dropbox
client_id = redacted
client_secret = redacted
token = redacted

[dbx_fbx_storage_a_crypt]
type = crypt
remote = dbx_fbx_storage_a_base:
password = redacted
password2 = redacted

[..]

A log from the command with the -vv flag

2022/11/14 14:10:54 NOTICE: Encrypted drive 'dbx_fbx_storage_d_crypt:': srcU.f=Encrypted drive 'dbx_fbx_storage_c_crypt:', srcURemote="Test-Folder", dstURemote="Test-Folder"
2022/11/14 14:10:54 ERROR : Folder4/Test-Folder: Dir.Rename error: MoveDir failed: from_lookup/not_found/..
2022/11/14 14:10:54 ERROR : IO error: MoveDir failed: from_lookup/not_found/..

That's a very long mount that seems to contradict itself.

`--allow-non-empty` is generally bad as it allows over-mounting. Any reason you added it?

You have a large buffer size (`--buffer-size 256M`) but then use `--use-mmap`, which is for lower memory usage.

You have `--tpslimit 12` but only burst to 1, effectively making it super slow as it'll only do 1 TPS.

Why `--transfers 32`? Lots of small files?

Why set `--timeout` to 15s?

That looks left over from Google Drive? Any reason you set this?

What's the reason for all the upstreams here? Is that to combine various differently encrypted locations?

The log file is missing any debugging info.

Can you run the mount with -vv and explain the exact command you run to generate the error along with the complete debug log?

Thanks @Animosity022 for the detailed analysis.

Correct, we have a separate remote for each project client and its files, and combine them. Each remote is encrypted, but the crypt settings are identical across all of them. It just saves me from having to start multiple mounts or work via the rc.

The problem occurs with an `mv` that spans the combined mount. I can't provide a debug log right now, because restarting the mount would endanger production operation.

Thanks, I used to need it. It's legacy by now.

`--use-mmap` was not known to me; it's also not in the official documentation of the mount (rclone mount).

Then I misunderstood the documentation at that point. Thanks for the hint!

There can be a lot of small files, but there can also be a lot of big ones. I had to find a balance that works in both cases. No problems so far.

Correctly guessed: legacy from our Google Drive days. The company was previously on Google Workspace. It's probably not necessary, but it doesn't hurt anyone either.

However, the configuration has nothing to do with the actual problem (moving files simply does not work on a combined mount). I think I have already found the cause: it is the use of combine, and moving files across remotes that are merged via combine. I think this is an edge case that rclone does not currently handle.

But thank you so much for your time and support!

Why use a flag when you don't know what it does? It's in the global flags section as it's not mount specific -> Global Flags

You can mount it somewhere else and just repeat and not touch your production mount.
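For example, something along these lines (paths and cache dir are examples, not your production values) gives a second, debug-level mount while the production mount at /mnt/cloud stays untouched:

```shell
# Hypothetical second mount purely for debugging; stop it with
# fusermount -u /mnt/cloud-debug when done.
mkdir -p /mnt/cloud-debug
rclone mount dbx_fbx_storage_combined: /mnt/cloud-debug \
  --config=/mnt/local/rclone/config/rclone.conf \
  --vfs-cache-mode full \
  --cache-dir /tmp/rclone-debug-cache \
  --log-file /tmp/rclone-debug.log \
  -vv
```

Then repeat the failing `mv` under /mnt/cloud-debug and share /tmp/rclone-debug.log.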

mv what? a file? a directory?

I can try to repeat it, but without knowing your exact flow, it makes it super tough to test out.

There is a possibly relevant thread about this on the dropbox forum.

Rclone is calling the move_v2 API call

And that error means

`not_found` (Void): There is nothing at the given path.

Can you show the mv command which gave the error? Were you trying to move something outside that dropbox?

Can you replicate this on the underlying dropbox only (without the combine)?
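A direct test, bypassing the combine layer, might look something like this (remote names are taken from the posted config; `rclone moveto` works on both files and directories):

```shell
# Hypothetical cross-remote move between two of the crypt remotes,
# without the combine remote in the path. "Test-Folder" is illustrative.
rclone moveto dbx_fbx_storage_c_crypt:Test-Folder \
              dbx_fbx_storage_d_crypt:Test-Folder -vv
```

If this fails with the same from_lookup/not_found error, the problem is in the dropbox backend rather than in combine.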

Thanks @ncw.

It doesn't matter whether I move a folder or a single file: the error always occurs when the operation crosses into another remote that was merged via combine. Files can be moved within a single remote; I have just tested this again to be sure.

My guess: each remote is isolated and cannot access the others, and therefore cannot move the files between them. So I would probably have to change the scope to solve the problem. That would at least explain the Dropbox error... or do you have another idea? Alternatively, rclone could fall back to a copy in the background when a move crosses remotes. Or?

I think you are probably right.

When you do a Move rclone will use the credentials on the destination drive to move the file/dir from the source drive.

If the user of the destination drive does not have permission to do this then it will fail in some fashion.

If you arrange for the users of the drives to have permissions on the other drives then it should work.

If we can definitively identify that the move has failed, then rclone can fall back to copying manually by streaming the file.

I had a go at this here - can you give it a try?

v1.61.0-beta.6567.4660f5f15.fix-dropbox-cross-account-move on branch fix-dropbox-cross-account-move (uploaded in 15-30 mins)

This is what I wrote in the commit.

We have the identical problem with Google Drive, and I've avoided putting in a commit like that because it gives a permission-denied error and I wasn't confident we could tell it apart from any other. But I realised just now that we can tell when we are doing a cross-remote transfer and only engage the fallback then. @Animosity022, do you think I should add that to the Google Drive backend too?

If we are doing a cross remote transfer then attempting a
Move/Copy/DirMove where we don't have permission gives
`from_lookup/not_found` errors.

This patch notices that error and, only if we are doing a cross remote
transfer, engages the fallback where the file is streamed.

Now I've had time to test the whole thing (v1.61.0-beta.6567.4660f5f15.fix-dropbox-cross-account-move). I created a file and moved it before it had been uploaded. The upload was canceled, the file was moved by `mv` and uploaded at the new location. But the file remained at the old location.

2022/11/18 10:42:47 ERROR : Folder1/test.txt: Failed to copy: context canceled
2022/11/18 10:42:47 INFO  : Folder1/test.txt: vfs cache: upload canceled
2022/11/18 10:42:47 INFO  : Folder1/test.txt: vfs cache: renamed in cache to "Folder2/test.txt"
2022/11/18 10:42:58 INFO  : vfs cache: cleaned: objects 1 (was 1) in use 1, to upload 1, uploading 0, total size 7 (was 7)
2022/11/18 10:43:22 INFO  : Folder2/test.txt: Copied (new)
2022/11/18 10:43:22 INFO  : Folder2/test.txt: vfs cache: upload succeeded try #2
2022/11/18 10:43:58 INFO  : Folder2/test.txt: vfs cache: removed cache file as Removing old cache file not in use
2022/11/18 10:43:58 INFO  : vfs cache RemoveNotInUse (maxAge=60000000000, emptyOnly=false): item Folder2/test.txt was removed, freed 7 bytes
2022/11/18 10:43:58 INFO  : vfs cache: cleaned: objects 0 (was 1) in use 0, to upload 0, uploading 0, total size 0 (was 7)
2022/11/18 10:44:58 INFO  : vfs cache: cleaned: objects 1 (was 1) in use 0, to upload 0, uploading 0, total size 0 (was 0)

Otherwise it seems to work @ncw

Edit: I just did a test with a file that had already been uploaded, and that works. It seems only not-yet-uploaded files are left behind and not deleted.

2022/11/18 10:47:34 INFO  : Folder2/test.txt: Copied (new) to: Folder1/test.txt
2022/11/18 10:47:34 INFO  : Folder2/test.txt: Deleted
2022/11/18 10:47:58 INFO  : vfs cache: cleaned: objects 1 (was 1) in use 0, to upload 0, uploading 0, total size 0 (was 0)

Edit 2:

Single files are apparently not a problem, but whole folders do cause one.

2022/11/18 13:59:45 NOTICE: Encrypted drive 'dbx_fbx_storage_i_crypt:': srcU.f=Encrypted drive 'dbx_fbx_storage_c_crypt:', srcURemote="Example-1", dstURemote="Example-1"
2022/11/18 13:59:46 ERROR : Folder1/Example-1: Dir.Rename error: can't move directory - incompatible remotes
2022/11/18 13:59:46 ERROR : IO error: can't move directory - incompatible remotes

This is probably a VFS bug :frowning:

Can you make a reproducer for me for this? So a sequence of shell commands I can run to see the problem?

Thanks

Hi @ncw,

I really only move things using `mv`.

The only difference between the cases is:

  • once a whole folder with several files
  • once a single file
  • once a file that was created and moved immediately, before it had been uploaded
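Sketched as shell commands (paths are placeholders; on the real system MOUNT would be /mnt/cloud, where Folder1 and Folder2 sit on different combine upstreams):

```shell
#!/bin/sh
# Hypothetical reproducer of the cross-upstream moves. Set
# MOUNT=/mnt/cloud to run it against the real mount; by default it
# uses a local scratch directory, which obviously won't reproduce the
# Dropbox error but shows the exact sequence of operations.
MOUNT="${MOUNT:-$(mktemp -d)}"
mkdir -p "$MOUNT/Folder1" "$MOUNT/Folder2"

# Case: single file (wait past --vfs-write-back before the mv to test
# the already-uploaded variant; mv immediately to test the other one).
echo "hello" > "$MOUNT/Folder1/test.txt"
mv "$MOUNT/Folder1/test.txt" "$MOUNT/Folder2/test.txt"

# Case: whole folder with files.
mkdir "$MOUNT/Folder1/Example-1"
echo "data" > "$MOUNT/Folder1/Example-1/file.txt"
mv "$MOUNT/Folder1/Example-1" "$MOUNT/Folder2/Example-1"
```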