Looking for tips for GCC f1-micro rclone copy

Okay, so I got all my data off of ACD and onto gdrive. Of course, in order to get around the 750GB daily upload limit, I made three accounts.
Now that I don’t need any local storage I’d like to use an f1-micro instance to copy the remaining data.
This means I only have around 600 MB of RAM, which turned out to be manageable: I got around the out-of-memory crash by using rclone -vv --drive-chunk-size 32M copy "" ""
instead of the 128M chunk size recommended elsewhere (which crashed instantly).
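For what it's worth, the full command is basically the following, just with my real remotes in place of these placeholder names (source: and dest: are not remotes from my actual config):

    rclone copy -vv --drive-chunk-size 32M source:path dest:path

The chunk size is the main memory knob here, since each transfer buffers one chunk of that size in RAM while uploading to drive.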

But then I just got a message: "Killed"
Is there any way to make rclone restart itself if it's killed? Given that I didn't have to restart the instance, and even my Linux screen -r XXXX command was still working, the VM's OS must not have totally shut down and rebooted. I mean, heck, I still had the terminal backlog, which is what showed me that weird:
Killed

So can rclone be told to restart itself if it's killed? And if not, can anyone write me up a quick "guide" on how to make GCC re-run an rclone copy command after it gets killed? I'm on CentOS 7, if that somehow matters; I could pick another free GCC OS.

If there's no way to restart rclone after the instance is pre-empted and "Killed", I might just have to pony up the little bit of extra cash for a dedicated 1-core instance instead of a pre-emptible one. I already ate up my $300 free credit just getting everything off ACD onto GDrive as fast as I could with much greedier GCC setups.

edit1:
It's very weird: rclone is literally chugging along, and then, instead of a timestamped line from rclone saying something about chunks, there's just a plain line:
Killed
Yet the OS has clearly not rebooted, because I can still see the previous page of -vv feedback about chunks... In theory I could write a command like: exec 6>&1; output="go"; while [ "$output" ]; do output=$(rclone -flags crypt1 crypt2 | tee /dev/fd/6); done

a) I've probably written that command entirely incorrectly, and b) wouldn't whatever gets rclone itself "Killed" probably kill my command-line while loop as well? Surely one of you has had this problem before though, yes? (A cleaned-up version is sketched under edit2 below.)

edit2:
So I'm testing out:
while [ 1 ]; do (rclone -stuff remote); done
but I have a feeling the magical "Killed" will break this as well.
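If that bare while loop isn't enough, the slightly more careful version I have in mind is something like this, a minimal sketch where the remote names and paths are placeholders: it re-runs rclone until it exits cleanly, waits a bit between attempts, and keeps the -vv output in a log file so the transfer feedback survives each restart:

    #!/bin/bash
    # Keep re-running rclone until it exits cleanly (status 0).
    # "source:" and "dest:" are placeholder remote names; the flags are the
    # low-memory ones from above.
    until rclone copy -vv --transfers 1 --drive-chunk-size 32M \
          --log-file=/root/rclone-copy.log source:path dest:path; do
        echo "rclone exited with status $?, restarting in 60s" >&2
        sleep 60
    done

The nice part over while [ 1 ] is that it actually stops once a run finishes without errors instead of looping forever.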

Yes, only the rclone process is killed on "out of memory". How many transfers are you using? RAM usage is something like drive-chunk-size * transfers.
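Rough back-of-the-envelope, if that rule of thumb holds (using the numbers mentioned in this thread and the f1-micro's ~600 MB of RAM):

    4 transfers x 128M chunks ≈ 512MB of upload buffers  (most of the f1-micro's ~600MB)
    4 transfers x  32M chunks ≈ 128MB
    1 transfer  x  64M chunks ≈  64MB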

On a dedicated machine I usually run something like chunk-size 512M and transfers 4; on GCC I was running around 128M, but that was with 3.75GB of RAM on Windows. Also, on Windows it seemed the RAM usage could vary even with those settings.

I'm still in the process of moving everything off ACD, but so far the Amazon API limit is what's really holding me back…

I used odrive for ACD. Also, when I ran out of memory I got something like two pages of output explaining I was out of memory, so I think "Killed" is something specific to my instance being pre-empted? That said, now that I'm using --transfers 1 --drive-chunk-size 64M I haven't been "Killed" in an hour and a half. (I was using --fast-list --drive-chunk-size 128M --transfers 4 (the default) when I was on a 4-core dedicated GCC instance, but I was also performing encryption and cryptchecks, whereas right now I'm not doing anything crypt related.)

The thing that slowed me down most was the 750GB-per-day upload limit to gdrive. I was able to grab 1-2TB per day from ACD before it banned me for a day. That's why I now have three users on my G Suite account; that's how I got around the 750GB daily upload limit. Now I'm just looking for the best way to merge those three users into one.

Running multiple f1-micro instances is blindingly fast and cheap, though, because each f1-micro instance basically gets its own gigabit line.

Did you use odrive with crypt as well? I didn't see a way to do that. I'm using NetDrive now to mount ACD on Windows and then run rclone crypt on top of that. I'm wondering if that generates more API requests or something, but I keep getting banned for what seems like 24h, and I'm maybe at 1TB down today, if that.

Merging the accounts together is rather simple. The easiest way, I think, is to have one main folder on the primary gdrive (gdrive1) and share that folder with the other accounts. Those accounts will get emails saying you shared a folder with them. Accept, and in the gdrive interface right-click the folder -> Add to My Drive. Now wait. It can take a long time for the other accounts to completely see everything from the shared folder; for me it was several hours, or even all night.

When everything looks good, move all the files into the shared folder, arranged wherever you want them in there.

On the primary account, go to Admin -> Apps -> G Suite -> Drive and Docs -> Transfer Ownership -> enter the accounts, and you're set. The only notification you get is an email on the primary account; even if it says the transfer failed, it didn't. But it can also take a long time. At first you'll see the size on the other gdrive accounts drop to zero, but it takes several hours, or even the whole night, for the primary gdrive's size to update completely; it will rise slowly.
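If you want to sanity-check the progress from the command line rather than refreshing the web UI, running rclone size against the same folder on each account should show the totals converging; the remote names here are just examples, not a real config:

    rclone size gdrive1:shared-folder
    rclone size gdrive2:shared-folder
    rclone size gdrive3:shared-folder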

My ACD was not encrypted. I had 21TB on ACD and it took me roughly 22 days to transfer it all off; I got banned by ACD 8-12 times. I tried something similar to your suggestion before, and in fact roughly 2 or 3TB of my data was transferred that way. However, it took a day or two and there was no feedback whatsoever. I'd much rather use rclone copy, then rclone move, than a single "G Suite, please move this for me" command. 3TB of the 21TB consists of a hard drive I backed up in 2011. I'm hoping to find that hard drive in my storage closet and run a cryptcheck against it, because the whole teams/users/owners/transferring G Suite website stuff seemed so buggy and unreliable that I don't trust that it worked. I discussed this issue in other threads, so I'm not sure I really want to go into it right now.
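For reference, once I dig that drive out, the check I have in mind would look roughly like this; the local path and the crypt remote name are made up for the example:

    rclone cryptcheck /mnt/backup2011 gcrypt:backup2011

cryptcheck compares the unencrypted source against the encrypted copy without having to download and decrypt everything from the remote.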

My current strategy is working pretty well. Either the while loop has worked, or reducing chunk-size and thereby reducing memory usage has worked.

Or at least my instance hasn't stopped transferring data for the past 2 hours solid. On top of that, these instances only have to run about 4-5 hours a day to hit the Google daily quota. It's just that the quota resets every day at 7pm and it's 9pm now. I'll try to remember to report back tomorrow on whether I have any more problems with "Killed". 2 hours isn't long enough to prove anything, since the "Killed" message was only cropping up every 20-40 minutes before anyway.

edit3: BTW pepsi, have you tried using AirExplorer? That one comes recommended for Windows users of ACD.

edit4: The instances stayed operational overnight. I managed to upload my 750GB of data; however, rclone never indicated that it had uploaded more than 150GB, which means the while loop was successfully kicking off new rclone runs. Now the real question here, which I would need Nick to answer:

Is:
Killed

an rclone error message for a crash, or is it a GCC error message? Usually when rclone runs out of memory I get something like a full page of error output, but it's possible both types of error message come from rclone itself. If rclone can in fact print "Killed" and then exit, that means I just need to turn down my chunk-size and the while loop is irrelevant; however, if that "Killed" never comes from rclone, it means a while loop is perfect for f1-micro GCC tasks.
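One way I could probably settle this myself the next time it happens, assuming the kernel's OOM killer is what's sending the kill signal, is to check the kernel log right after that bare "Killed" line shows up:

    dmesg | grep -iE 'out of memory|killed process'
    # or, since this is CentOS 7 with systemd:
    journalctl -k --since "1 hour ago" | grep -iE 'oom|killed process'

If the OOM killer shows up in there, the "Killed" is coming from the system rather than from rclone, which would mean the while loop plus a smaller chunk size is exactly the right combination.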