Rclone move from Google Drive to OneDrive stopped with message "Killed"

What is the problem you are having with rclone?

I use the rclone move command to move files from Google Drive to OneDrive, but it stopped with the message "Killed".
It often occurred when transferring big files.
I saw that hard disk reads were high.
I don't know what happened; I only get the "Killed" message. Please guide me on how to resolve it, thanks.

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.0

  • os/version: centos 7.9.2009 (64 bit)
  • os/kernel: 3.10.0-1160.59.1.el7.x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.8
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive and OneDrive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone move gd:movies od:movies -vv

The rclone config contents with secrets removed.

Current remotes:

Name                 Type
====                 ====
gd                   drive
od                   onedrive

A log from the command with the -vv flag

2022/04/05 06:25:02 DEBUG : Movie1/1.mp4: Uploading segment 450887680/2437003437 size 10485760
2022/04/05 06:25:02 DEBUG : Movie2/2.mp4: Uploading segment 450887680/1931710196 size 10485760
Killed

Killed usually means you ran out of memory on the system.

What are the specs on the server/machine you are running on?

You can reduce --transfers or --checkers
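
For example, applied to the command from your post (just a sketch with the lowest settings; tune the numbers to taste):

rclone move gd:movies od:movies -vv --transfers 1 --checkers 1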


Thanks for your help.
The specs on the machine are

$ sudo lshw -short
H/W path            Device     Class          Description
=========================================================
                               system         Google Compute Engine
/0                             bus            Google Compute Engine
/0/0                           memory         96KiB BIOS
/0/1001                        processor      Intel(R) Xeon(R) CPU @ 2.30GHz
/0/200                         memory         614MiB System Memory
/0/200/0                       memory         614MiB DIMM RAM Synchronous
/0/100                         bridge         440FX - 82441FX PMC [Natoma]
/0/100/1                       bridge         82371AB/EB/MB PIIX4 ISA
/0/100/1.3                     bridge         82371AB/EB/MB PIIX4 ACPI
/0/100/3                       generic        Virtio SCSI
/0/100/3/0          scsi0      generic        Virtual I/O device
/0/100/3/0/0.1.0    /dev/sda   disk           32GB PersistentDisk
/0/100/3/0/0.1.0/1  /dev/sda1  volume         199MiB Windows FAT volume
/0/100/3/0/0.1.0/2  /dev/sda2  volume         29GiB data partition
/0/100/4                       network        Virtio network device
/0/100/4/0          eth0       network        Ethernet interface
/0/100/5                       generic        Virtio RNG
/0/100/5/0                     generic        Virtual I/O device
/0/1                           system         PnP device PNP0b00
/0/2                           input          PnP device PNP0303
/0/3                           input          PnP device PNP0f13
/0/4                           communication  PnP device PNP0501
/0/5                           communication  PnP device PNP0501
/0/6                           communication  PnP device PNP0501
/0/7                           communication  PnP device PNP0501

That looks like you don't have much RAM.

To use less RAM these flags will help

--transfers 1
--checkers 1
--buffer-size 0

You can also check out this FAQ entry: reducing memory usage
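
Putting those together on your command would look roughly like this. GOGC=20 makes the Go garbage collector run more aggressively, and --onedrive-chunk-size controls the 10485760-byte upload segments you see in your log (it must be a multiple of 320 KiB). Treat both as starting points rather than tested values:

GOGC=20 rclone move gd:movies od:movies -vv --transfers 1 --checkers 1 --buffer-size 0 --onedrive-chunk-size 5M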


Thanks @Animosity022 and @ncw.
I checked the machine's log; the root cause is out of memory, and then the kernel killed the rclone process. Please see the following log.

Apr  7 00:05:43 instance-1 kernel: Out of memory: Kill process 25106 (rclone) score 174 or sacrifice child
Apr  7 00:05:43 instance-1 kernel: Killed process 25106 (rclone), UID 1000, total-vm:832112kB, anon-rss:103904kB, file-rss:0kB, shmem-rss:0kB
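
(In case anyone needs it, lines like these can be found on CentOS 7 with something like the following; log paths may differ on other distros:)

sudo grep -i "out of memory" /var/log/messages
sudo dmesg | grep -iE "killed process|out of memory"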

I tried the command referred to in Optimized for low memory/high bandwidth - #8 by mattzab

rclone move gd:movies od:movies -vv --transfers 1 --checkers 1 --use-mmap

It is better and kept running for 14 hours, but it still ran out of memory.

Now I will try the following command to see if it will be better or not.

rclone move gd:movies od:movies --transfers 1 --checkers 1 --buffer-size 0

That's really low memory. Can you add a bit more memory rather than try to fight against it?

Thanks for your suggestion.
This machine is a free VM (virtual machine) on Google Cloud Platform.
If I want to add more memory, I have to pay to upgrade the VM.
At this moment, I have no plan to upgrade it.

I suggest you rethink your whole plan here. First, the Google free-tier VMs have 1GB of memory, and I've never run into problems using them even with multiple rclone jobs running. I usually don't even need to include --use-mmap anymore.

You mention that you're using the free tier and you don't want to pay money. But since you're using a GCE instance to move data from Google Drive to OneDrive, you're already paying for egress beyond the first gig you transfer. So if you're paying anyway and this is an important move, you should upgrade the box so it can handle it.

If you intend to transfer everything for free, you can't use GCE at all for transferring gdrive --> onedrive.
I assume that since you're copying a folder called "movies", it isn't small.

Want free? Use your home connection or bum off a friend. There are also cheap VPS providers out there. I used to use VPSCheap years ago; at the time it was $20/yr for a box with unlimited bandwidth at roughly 1Gbit/s speeds. Not sure what they offer now or if they're still around, but you could try that or something similar if you want cheap.

Good points, I was thinking the same.

There is a fellow rcloner who runs rclone on a home router with just 128MiB.

Perhaps the OP can flip the script:
use a free VM from Azure, which might not charge for ingress to OneDrive.
Though with micro$oft, that needs to be confirmed.

I suggest you rethink your whole plan here. First, the Google free-tier VMs have 1GB of memory, and I've never run into problems using them even with multiple rclone jobs running. I usually don't even need to include --use-mmap anymore.

Great! You've resolved another question of mine.
I will rethink my plan and check GCP again.

Right now this command has kept running for at least 24 hours. It works, thanks.
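
To keep an eye on it, something like this can show rclone's resident memory while the move runs (just a generic Linux check, not anything rclone-specific):

watch -n 60 'ps -o pid,rss,vsz,etime,comm -C rclone'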

Be very careful, as I mistakenly moved data to Dropbox and had quite a bill.

I ended up getting it cancelled, but I almost paid $2k for moving data. The billing doesn't show up right away, and you'll most likely get a very large surprise if you aren't sure what the egress costs per GB.


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.