Very low speed in Dropbox

@asdffdsa If you need to summon me, or want me to know about something, just use the mention @thestigma

This will send me a notification in the forum with a link to the place I was mentioned (just as you should have gotten one now)

Please use responsibly. (Summoning of evil spirits or demon entities using this forum function is not recommended.)

thanks @thestigma,

thanks @thestigma and @asdffdsa for the explanations, wasabi backup is much faster with this script.


sure. now that you have a basic script running, let's get to what you asked about: ransomware protection.
here is a script snippet that can provide protection from ransomware.

rclone.exe sync "c:\thefolder" "wasabiwest01:thebucket\thefolder\backup" --backup-dir=wasabiwest01:thebucket\thefolder\archive\20190922.102642\

for each local file that has been modified, the corresponding file in the cloud folder 'backup' will be moved to
thebucket\thefolder\archive\20190922.102642
and after that move, rclone will upload the modified local file to thebucket\thefolder\backup\

in effect, you will have unlimited copies of older versions of your files.
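
for illustration, after a couple of runs (the timestamps here are hypothetical) the bucket would look something like:

thebucket\thefolder\backup\                      <- current copy of c:\thefolder
thebucket\thefolder\archive\20190922.102642\     <- files replaced or deleted during that run
thebucket\thefolder\archive\20190923.080015\     <- same, for the next run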

let me know if you have any questions.

As I understand it, with rclone sync, when a file is deleted at the source it is automatically removed from the cloud server as well. Is there any way to do an incremental backup so that files are never deleted from the server? That way, when a file is accidentally deleted at the source, it is not deleted on the cloud server.

the script i shared with you will do what you want.
forever forward incremental backups.
each time you run the script, just change the date.time to the current date and time.

rclone.exe sync "c:\thefolder" "wasabiwest01:thebucket\thefolder\backup" --backup-dir="wasabiwest01:thebucket\thefolder\archive\date.time"


It is best to understand what the 3 basic transfer commands are and what they are designed for:

rclone move

This moves files much like the move you are familiar with on Windows or Linux. Technically it means copying the file, checking that it transferred OK, and then deleting the original.

rclone copy

This copies files, similar to what you know from Windows or Linux. The major difference is that Windows or Linux may ask if you want to overwrite existing files with the same name. By default rclone copy will replace any files that are older (no prompt) and will just skip any files that have the same size + modified time. There are many options to tweak the details, but that's the basics.

rclone sync

This does not exist natively on most OSes (in the GUI at least). A sync is designed to make one folder (or tree of folders) on the destination identical to what it is on the source. So basically, it's a copy plus deletion of anything that no longer exists on the source. The benefit of this is that you may want a backup, but you don't want to manually go and clean out old, irrelevant files on the backup that you already decided to throw away on the source (your home computer).
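
As a quick illustration of all three (hypothetical local folders and a remote simply named remote:):

rclone move c:\outbox remote:outbox
rclone copy c:\photos remote:photos
rclone sync c:\documents remote:documents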

A super useful option for sync is --track-renames.
Usually sync is "dumb" and only mirrors the source to the destination. So what if you renamed a folder on the source and then did a sync? You get a new folder on the destination, the old one remains, and you have to re-upload everything in that folder again. If you often reorganize your data this can be pretty inefficient and messy. --track-renames will instead compare hashes on all files and check whether an identical file already exists somewhere else on the destination, or under a different name. If it finds one it will just move/rename that file instead, which is obviously much faster.

If you want a "sync without deletion" that would be a copy. It will never remove anything (except to update older files), but expect the backup to become quite messy over time.

If you want some level of data retention then I'd probably suggest using the --backup-dir feature @asdffdsa talks about. This will make any files that would have been deleted or replaced by a newer version go into a dedicated backup folder instead. The result is that you have a clean, mirrored, up-to-date sync plus an archive of all the old stuff that you can go find files in if there is ever an emergency.

It is worth noting that Wasabi and many other cloud providers also offer something similar on their own, called "file versioning". That basically means they can keep X amount of old files for you for Y amount of time so it is possible to roll back time in case of accidents or special situations. This is pretty much how larger companies that don't use rclone handle the problem of wanting a second-layer of backups.

A lot of cloud services also by default keep deleted files recoverable for a set period (even if you don't use file versioning). Usually you don't even pay for this or have it counted against your quota. It's like the "trash bin" of the cloud system. I don't know the specifics of Wasabi on this, but I assume they have something similar. On Google Drive, for example, all deleted files stay in the "trash bin" for 30 days before they get marked for deletion - unless you specifically ask to purge the trash. So you usually have a basic level of disaster recovery even with no special setup (as long as you know about it, of course). Rclone can often show your trash specifically if you provide a specific flag.
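
On Google Drive that flag is --drive-trashed-only, so something like this (assuming a remote named gdrive:) would list only what is sitting in the trash:

rclone ls gdrive: --drive-trashed-only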

@asdffdsa Good example, but it would probably be good to script that date to fill in automatically if you don't already. I assume you do (based on how specific the time is) but it's not clear from your example.

In either case this is simple to automate, and it does indeed result in a very robust "back in time" archive where you can easily go find something from the past.
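
For example, here is a minimal Python sketch of that idea (assuming rclone is on your PATH, and reusing the remote/bucket names from the examples above as placeholders):

import subprocess
from datetime import datetime

# build a date.time stamp like 20190922.102642
stamp = datetime.now().strftime("%Y%m%d.%H%M%S")

source = r"c:\thefolder"
backup = "wasabiwest01:thebucket/thefolder/backup"
archive = f"wasabiwest01:thebucket/thefolder/archive/{stamp}"

# sync to the backup folder; replaced or deleted files get moved into the timestamped archive
subprocess.run(
    ["rclone", "sync", source, backup, f"--backup-dir={archive}"],
    check=True,
)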

The only downside is that it's going to grow pretty fast and take up a lot of space - which means some more cost on most cloud services - but you can always go back and manually clean up stuff you are sure is no longer relevant once in a while.

@thestigma, thanks.
yes it does grow pretty fast but cloud storage is cheap and as needed i prune the files, which is easy for me as i use date.timestamps for archive folders and filenames.
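
for example, to prune everything in the archive older than 90 days (a hypothetical retention period, adjust to taste):

rclone delete wasabiwest01:thebucket\thefolder\archive --min-age 90d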

right now, i use wasabi but i am thinking to use amazon, with glacier, and that would be super-cheap.
the thing about wasabi is it has fast hot storage but no slower, cheaper cold storage like amazon glacier.

i have a 300+ line python script i created to handle all my different backup needs.
and for all of it, i always use date.time.stamps as part of folder and/or filename.
it scripts VSS.
it scripts rclone.
it can create .7z files and upload to cloud.
it can use fastcopy, which has a hash checksum feature, to copy files from local to local.

it also works as client or server.
so for each desktop computer that gets backed up running the veeam agent, the backup files are stored on a local backup server running the awesome free windows server 2019 hyper-v edition.

then the local backup server, running that same python script as a daemon, uses rclone to copy those large backup files to the cloud using VSS.

Remind me to hit you up when my noob-ass gets stuck on scripting then :wink:

File compression is one of those things I am really looking forward to development of.
There is already a "compression remote" project well underway that you can just slot into your rclone remote-chain and have it do it all for you (transparently!).

The one major thing it currently does not have though is a system to combine groups of tiny files where it makes sense (in a transparent way). Most cloud systems currently do not perform well on many tiny files - and a system like that would almost remove that limitation completely, making uploading or downloading 10,000 tiny text files take seconds instead of an hour. It would also greatly reduce the number of files, which actually matters on a lot of cloud drives since most have some kind of maximum, or at least "recommended for performance", number of files. Listings would also be massively faster.

So if you are a programmer and you think you can learn to work with Go then that might be an interesting thing to contribute to :smiley:

i would be glad to help you with script but i have no plans to learn go when i can use python for scripting.

the thing is, it is quick and easy to use python for scripting together with trustworthy command-line programs such as rclone, 7zip and fastcopy.

in python i create on-the-fly .cmd files and execute them.

in the python source code, i create the commands as raw f-strings (so the backslashes stay literal), such as:
RcloneSyncFileCmd = rf'{ScriptsDir}\rclone.exe sync "{SourceDir}" {DestDir} {RcloneSyncCmdFlags} {RcloneLogFlags}'
ZipCmd = rf'{ScriptsDir}\7za.exe a "{ZipFileName}" "{SourceDir}\" -p{zippwd} {ZipCmdFlags}'

and copy that to a file named rclone.cmd that would look like:
set RcloneSyncFileCmd=C:\data\rclone\scripts\rclone.exe sync "c:\data\rclone\source\one" wasabiwest01:en07-one\backup\ --stats=0 --progress --backup-dir=wasabiwest01:en07-one\archive\20190922.102642\ --log-level DEBUG --log-file=C:\data\rclone\logs\one\20190922.102642\one_20190922.102642_rclone.log
set ZipCmd=C:\data\rclone\scripts\7za.exe a "\\vserver03\en07-rclone\one\one_20190922.102642.7z" "c:\data\rclone\source\one" -pfdsaasdf -bb1 -ms=off
start /w %RcloneSyncFileCmd%
%ZipCmd%
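
here is a minimal runnable sketch of that write-then-execute pattern, with hypothetical hard-coded paths and flags standing in for the values my real script builds on the fly:

import subprocess

ScriptsDir = r"C:\data\rclone\scripts"
SourceDir = r"c:\data\rclone\source\one"
DestDir = "wasabiwest01:en07-one/backup"
Flags = "--stats=0 --progress"

# build the command line as a raw f-string so backslashes stay literal
RcloneSyncFileCmd = rf'{ScriptsDir}\rclone.exe sync "{SourceDir}" {DestDir} {Flags}'

# write the on-the-fly .cmd file, then execute it
CmdFile = r"C:\data\rclone\scripts\rclone.cmd"
with open(CmdFile, "w") as f:
    f.write(RcloneSyncFileCmd + "\n")

subprocess.run(CmdFile, shell=True, check=True)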

let me know if you have any questions.

Is there any way for rclone to ignore a file or extension? It is taking too long to scan Thumbs.db and .DS_Store files, which are created automatically on Windows and macOS.

--exclude Thumbs.db --exclude .DS_Store should do it (provided they are both files).
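
If the list of exclusions grows, you can also put the patterns in a text file (one per line) and point rclone at it with --exclude-from, e.g.

rclone copy E:\ Dropbox:/backup/ --exclude-from excludes.txt

(excludes.txt being a hypothetical filename here).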

Thanks for the reply, but the backup is returning a large number of upload errors:
Error "Failed to copy: upload failed: path/disallowed_name/"
What could it be?

what is the exact command you are using?
what are the exact errors you are getting?
you should post the log.

https://rclone.org/docs/#log-file-file

--log-file=log.txt

https://rclone.org/docs/#log-level-level

--log-level=debug
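
for example, added to whatever command you are running:

rclone copy source:path dest:path --log-level=DEBUG --log-file=log.txt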

rclone copy E:\ Dropbox:/backup/ --exclude Thumbs.db --exclude .DS_Store --exclude Desktop.ini --fast-list --transfers 20 --dropbox-chunk-size 64M -v -P

when i have a problem, i simplify the command as much as possible and then add one flag at a time.
if you try

rclone copy E:\ Dropbox:/backup/

do you get errors?

I believe the errors started after I added the exclusions for the Thumbs.db, .DS_Store and Desktop.ini files.

again, i would try the following command and see what happens, you need to be logical about debugging.

rclone copy E:\ Dropbox:/backup/
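
if that runs clean, add your flags back one at a time, for example:

rclone copy E:\ Dropbox:/backup/ --exclude Thumbs.db -v

and re-run after each flag you add, until the errors show up again.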

i do not use dropbox but for most storage backends, you cannot re-use a bucket name.
if anybody on wasabi has a bucket named backup, no other wasabi user can create a new bucket named backup.
