Is there a better way to go about this besides server-side copying to 'My Drive' and then manually re-uploading the files into the crypt?
I use my own shared drive with clonebot (gclone) to automatically rotate Service Accounts into the shared drive, which acts as a sort of temp drive, since Service Accounts can't use root folder IDs for security reasons.
Most of the media I get nowadays already comes from a Google Drive share... (aside from NZBs/torrents, which are auto-moved to the crypt through Radarr/Sonarr.)
I know nothing about gclone or the other software.
rclone itself has to crypt the files.
you can use a cheap/free google virtual machine and run rclone on that.
other than that, perhaps @Animosity022 might have a suggestion.
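For anyone landing on this later, a crypt remote layered on top of a regular Google Drive remote looks roughly like this in rclone.conf (the remote names `gdrive` and `gcrypt` and the folder path here are hypothetical, and the passwords shown are placeholders for the obscured values rclone writes itself):

```ini
# plain Google Drive remote
[gdrive]
type = drive
scope = drive

# crypt remote wrapping a folder on the drive remote above
[gcrypt]
type = crypt
remote = gdrive:Media Library
password = <obscured password>
password2 = <obscured salt>
```

Running something like `rclone copy "gdrive:Media Cache" gcrypt: --progress` on a Google Cloud VM then keeps the download/encrypt/re-upload traffic on Google's network, which is the suggestion above.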
gclone is only used for cloning shared folders (server-side) to a team drive, as I don't have to configure much of anything: just put in the link and pick my directory from the dropdown in the bot. I self-host the bot myself.
Media Cache is the team/shared drive name. I couldn't really think of anything else, since the files stay on there for an hour at most before the script in the OP moves them over.
(The screenshots aren't great due to blurriness; I included them mostly for 'tree' clarity.)
Media Cache - share drive (temp Location)
Media Cache - My Drive (working folder)
Media Library - My Drive (crypt container/folder for plex)
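Given that layout, the hourly move from the temp share drive into the crypt might be sketched roughly like the following (remote names `cache` and `gcrypt` are hypothetical stand-ins; the actual script from the OP may differ):

```shell
# move everything from the temp team drive into the crypt remote,
# cleaning up emptied source directories afterwards
rclone move "cache:" "gcrypt:" --delete-empty-src-dirs -v
```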
I feel like I'll need more RAM, as remoting into the desktop environment (even with a 'minimal' Linux desktop install, `sudo apt-get install --no-install-recommends ubuntu-desktop`) takes 3-5 minutes to open Firefox and another 5-10 minutes to load a web page in the VM.
And when I try to SSH back in after loading up the DE and closing the connection, it takes a few minutes to connect after putting in the key. So it's definitely being bottlenecked just from idling.
The reason for using the desktop environment is to get the config files over to the Ubuntu VM, sign into Google Drive, etc., so I don't have to upload them to Pastebin or the like.
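A lighter alternative that avoids the desktop environment entirely is copying the config over SSH. A sketch, assuming the VM is reachable as `user@vm-ip` (hypothetical) and rclone uses its default config location:

```shell
# print where rclone keeps its config on the local machine
rclone config file

# copy that config to the VM over SSH (default path shown; adjust as needed)
scp ~/.config/rclone/rclone.conf user@vm-ip:~/.config/rclone/rclone.conf
```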
Yeah, I figured it out with nano... forgot it exists. The reason for the Linux VM is that it's on Google Cloud Platform. I moved the rclone config over with nano, though.
I put in a 2 GB swap file just in case, though.
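For reference, the usual way to add a 2 GB swap file on Ubuntu (a standard procedure, not something from this thread; requires root) is:

```shell
# allocate a 2 GB file, restrict its permissions, format it as swap, enable it
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```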
Hmmm, I just tried on Windows again and I somehow broke my config apparently... Give me one moment.
Never mind, I forgot the `:`. The config works fine on Windows.
So the config file was stuck in temp format... somehow the drive root ID wasn't copied over.
Deleting the config and re-pasting it fixed it. https://i.imgur.com/y5ul5JQ.png
I managed to answer my own question about a "better way to upload to crypt remotes": using the VM, I can set up 2 additional remotes with 2 (4) service accounts to allocate 1.47 TB + 0.03 TB for trickle, so I can use the main account for backups while the 2 service accounts use --drive-impersonate.
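A sketch of what a service-account remote plus impersonation looks like (the remote name `sa1`, the key path, and the user email are all hypothetical; the JSON key files come from the Google Cloud console):

```ini
# drive remote authenticated with a service account key file
[sa1]
type = drive
scope = drive
service_account_file = /home/user/keys/sa1.json
```

Transfers can then run as a specific user with the drive backend's impersonation flag, e.g. `rclone copy /local/media sa1:uploads --drive-impersonate user@example.com` (impersonation requires domain-wide delegation to be set up for the service account).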
well... I will probably be self hosting my site on the vm.
Yeah, you're definitely right on the nose. I just don't know why the speed is so slow, though; it should be using Google's bandwidth regardless of whether it's across configs, since it's hosted on their network, right? rclone RAM usage via htop: very little RAM usage. CPU usage could be a bit lower, but it's fine.
when you run these commands, you need to use a debug log and post the output.
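Concretely, that means adding the debug flag and a log file to whatever command is being tested, something like (source/destination names are placeholders):

```shell
# -vv enables debug-level output; --log-file captures it for posting
rclone copy source: dest: -vv --log-file=rclone.log
```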
it's not worth second-guessing after the fact.
if you run the exact same command using the exact same version of rclone on two machines and get different results, the odds are there is a difference in the machine specs.
Most likely due to RAM: 32 GB on the host & 1 GB on the VM.
I'll grab a 10 GB base file, run it again with a debug log, and post the results in a bit.