I am having difficulty using the move command. I think I am confused about syntax. I've read some examples and some other posts to no avail.
Specifically, I am getting an "error reading destination directory: access denied" error.
Interestingly, I am able to move files when I have the remote drive VFS mounted. So I know that I have access. That's why I think I am just using the move command incorrectly.
What is your rclone version (output from rclone version)
v1.55.1
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Windows 10 64-bits
Which cloud storage system are you using? (eg Google Drive)
Wasabi
The command you were trying to run (eg rclone copy /tmp remote:tmp)
without a debug log and the exact command, no idea what is really going on
can you post the rclone.log for rclone ls <remote name>:<dir on remote> --max-depth=1 --dump=bodies --retries=1 --low-level-retries=1 --log-level=DEBUG --log-file=rclone.log
Thanks for your response. Here is what shows up in the log:
...
2021/07/03 12:22:54 DEBUG : Using config file from "C:\Users\bbymi\.config\rclone\rclone.conf"
2021/07/03 12:22:54 DEBUG : rclone: Version "v1.55.1" starting with parameters ["C:\rclone\rclone.exe" "ls" "jesseremote:jessedir" "--max-depth=1" "--dump=bodies" "--retries=1" "--low-level-retries=1" "--log-level=DEBUG" "--log-file=rclone.log"]
2021/07/03 12:22:54 DEBUG : Creating backend with remote "jesseremote:jessedir"
2021/07/03 12:22:54 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2021/07/03 12:22:54 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/07/03 12:22:54 DEBUG : HTTP REQUEST (req 0xc000555f00)
2021/07/03 12:22:54 DEBUG : GET /jessedir?delimiter=%2F&encoding-type=url&max-keys=1000&prefix= HTTP/1.1
Host: s3.us-east-2.wasabisys.com
User-Agent: rclone/v1.55.1
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20210703T162254Z
Accept-Encoding: gzip
<Error>
  <Code>PermanentRedirect</Code>
  <Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message>
  <Bucket>jessedir</Bucket>
  <Endpoint>jessedir.s3.wasabisys.com</Endpoint>
  <RequestId>FBA7500851D74967</RequestId>
  <HostId>lan22oTAB9rO+W8UbQeLUcvPjDmTa+CvOBjqEaYGA2wH2btxnCE87kKX2uYKWC4YhyOC+Y08BO6G</HostId>
</Error>
It does. It looks to me like Wasabi's us-east-2 endpoint is referring me to s3.wasabisys.com.
us-east-2 is what I have in my configuration file and it is where my bucket is located.
Yes, I am already using the endpoint that Wasabi told me to use. s3.us-east-2.wasabisys.com, which is what I have in my config file.
Whatever that referral is, it is automatic.
I am using an existing bucket. The directory on the remote already exists.
Are you saying that I should pull the region setting out of the config file? The wizard put it there. I'll pull it out and see how that goes.
Oh boy! I'm confused.
The endpoint in my config should have my bucket name in it, or should it just say "endpoint = s3.us-east-2.wasabisys.com"?
It looks like the system is currently putting the directory name in automatically at the end of the domain:
Referrer: https://s3.us-east-2.wasabisys.com/jessedir...
Is there a distinction between my directory and my bucket name? jessedir is a directory that exists within, let's call it, jessebucket. Somehow, I've been thinking that a remote corresponds to a bucket, and that the bucket was basically just the root directory.
My apologies for all of the questions. I am a super-newb when it comes to S3 compliant storage and Rclone.
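For reference, the endpoint in the config should not contain the bucket name. A typical Wasabi remote in rclone.conf looks something like this (keys redacted; remote name and region taken from this thread):

```ini
[jesseremote]
type = s3
provider = Wasabi
access_key_id = XXXXXXXX
secret_access_key = XXXXXXXX
region = us-east-2
endpoint = s3.us-east-2.wasabisys.com
```

Note there is no bucket anywhere in the remote definition; the bucket only appears in the paths you pass on the command line.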
that is not correct for s3.
there is no way to specify a bucket when creating an s3 remote. the remote points at the storage account, and the bucket is the first element of the path, so a remote is not a bucket.
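so the bucket name has to be the first path element in every command. a sketch of the move, assuming the bucket is named jessebucket as described above and a hypothetical local source folder:

```shell
rclone move C:\local\source jesseremote:jessebucket/jessedir -P --log-level=DEBUG --log-file=rclone.log
```

with `jesseremote:jessedir`, rclone treats jessedir as the bucket name, which is why the request was redirected and failed.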
by default, wasabi creates a root user with a corresponding id/key pair that has total permission over every bucket and file.
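to confirm which buckets those root credentials can reach, list the root of the remote (remote name from this thread):

```shell
rclone lsd jesseremote:
```

that prints every bucket in the account as a top-level directory, and your actual directories live one level below those.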