Rclone mount samba share

I shared an rclone mount (Google Drive) over Samba. The browsing/copying part works, however I cannot write to it (the error says there is not enough space).

As a test, I shared a local folder with exactly the same Samba settings, and it works fine.

Mounted with command:
rclone mount --allow-other googledrive:/ /home/administrator/googledrive &

I enabled user_allow_other in /etc/fuse.conf.

P.S. I also set up FTP access to the mounted folder, and it is working as well.

It’s been mentioned everywhere already: it’s not advisable to write/copy via a mount. The suggested method is to read from those mounts, but use rclone copy/rclone move/rclone sync to copy data there.

Create a Samba share on a local directory on the system and have a crontab entry that picks those files up every so often and uploads them.
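As a sketch of that approach (the crontab line below is hypothetical; the paths and the googledrive: remote name are the ones used elsewhere in this thread — adjust to your setup):

```shell
# Hypothetical crontab entry: every 10 minutes, move anything dropped into
# the locally shared folder up to Google Drive, logging to a file.
*/10 * * * * rclone move /home/administrator/download googledrive:/ --log-file=/var/log/rclonemove.log
```

rclone move deletes the local copies after a successful upload, so the local directory only needs enough space to buffer files between cron runs.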

I know it's not advisable, but it seems to work for some people, so I wonder whether there is a setting or something I could change to make it work.

I too have tested this very thing, but I didn’t have any issue writing to the mount. Windows reported the available space as being 1 petabyte. It’s worth noting that I was using the latest rclone beta, perhaps you should try that out.

The latest beta has a fix for that issue - give that a go.

Tried with the latest beta: rclone v1.34-23-gc41b67eβ

OS: Ubuntu 16.04 LTS server
Default Samba installation; I only added two shares to the config.

[GoogleDrive]
comment = GoogleDrive Mount
path = /home/administrator/googledrive
browsable = yes
writable = yes
read only = no
guest ok = no
write list = administrator
read list = administrator
valid users = administrator

[Download]
comment = GoogleDrive Mount
path = /home/administrator/download
browsable = yes
writable = yes
read only = no
guest ok = no
write list = administrator
read list = administrator
valid users = administrator

If I try to write to an existing text file, I get an I/O error.
I can't copy files to the drive; however, browsing, copying from it, and deleting all work.

P.S. The Download share is just for testing, so I could check that Samba itself works.

Do you get an error if you try to write to a file which doesn’t exist?

Can you find the corresponding errors in the rclone mount logs?

Where are the rclone logs stored?
I tried adding a log file to the mount command, but nothing is being recorded.
rclone mount --allow-other --log-file=/var/log/rclonemount.log googledrive:/ /home/administrator/googledrive

They are sent to the console unless you redirect them or use the --log-file option.

Looks OK… Does the user you are running rclone mount as have permission to write to /var/log?

Yes, and it did not log anything until I added the -v option.
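Putting those pieces together, a mount invocation that actually produces log output would look something like this (same paths as above; -v raises the log level so entries get written to the file):

```shell
# Mount Google Drive with logging enabled; -v is needed for routine
# operations to appear in the log file at all.
rclone mount --allow-other -v \
  --log-file=/var/log/rclonemount.log \
  googledrive:/ /home/administrator/googledrive &
```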

Since I updated to the latest beta I can copy files to the drive, so that's fixed :slight_smile:

There is still a problem writing to an open file.
I attached the log of the following test:

  1. Copied testinshare.txt (it had 1 line of content in it, i.e. not an empty file) to the Google Drive root
  2. Opened the text file and added a 2nd line of text
     When saving the file I received an I/O error.
     http://pastebin.com/ja21NfZg

Then I added more lines to the local file and tried to overwrite the one on the drive, and I received a Windows error.

This was added to the log:
2016/11/25 12:44:12 testinshare.txt: File.Open

I don’t see any errors in that log file from rclone :frowning:

Are there any errors in the samba log file?

Also does it work if you copy the file with explorer rather than save it from the text editor?

Copying works, but overwriting an existing file does not.
Samba debug log:

[2016/11/27 19:41:08.767799,  2] ../source3/smbd/service.c:872(make_connection_snum)
  flix-win10 (ipv4:195.95.158.124:52517) connect to service GoogleDrive initially as user administrator (uid=1000, gid=1000) (pid 1618)
[2016/11/27 19:41:15.844791,  1] ../source3/printing/printer_list.c:234(printer_list_get_last_refresh)
  Failed to fetch record!
[2016/11/27 19:41:15.844850,  1] ../source3/smbd/server_reload.c:69(delete_and_reload_printers)
  pcap cache not loaded
[2016/11/27 19:43:53.908359,  2] ../source3/smbd/open.c:1005(open_file)
  administrator opened file testinshare5.txt read=Yes write=Yes (numopen=2)
[2016/11/27 19:44:04.085398,  1] ../source3/printing/printer_list.c:234(printer_list_get_last_refresh)
  Failed to fetch record!
[2016/11/27 19:44:04.085471,  1] ../source3/smbd/server_reload.c:69(delete_and_reload_printers)
  pcap cache not loaded
[2016/11/27 19:44:05.360355,  2] ../source3/smbd/close.c:780(close_normal_file)
  administrator closed file testinshare5.txt (numopen=2) NT_STATUS_OK
[2016/11/27 19:44:14.307495,  2] ../source3/smbd/open.c:1005(open_file)
  administrator opened file testinshare5.txt read=No write=No (numopen=2)
[2016/11/27 19:44:14.308918,  2] ../source3/smbd/close.c:780(close_normal_file)
  administrator closed file testinshare5.txt (numopen=1) NT_STATUS_OK
[2016/11/27 19:44:14.329947,  2] ../source3/smbd/open.c:1005(open_file)
  administrator opened file testinshare5.txt read=No write=No (numopen=1)
[2016/11/27 19:44:14.331260,  2] ../source3/smbd/close.c:780(close_normal_file)
  administrator closed file testinshare5.txt (numopen=0) NT_STATUS_OK

Tested with Samba log level 3:

[2016/11/27 19:52:23.296404,  3] ../source3/smbd/open.c:881(open_file)
  Error opening file testinshare6.txt (NT_STATUS_IO_DEVICE_ERROR) (local_flags=578) (flags=578)
[2016/11/27 19:52:23.298015,  3] ../source3/smbd/open.c:881(open_file)
  Error opening file testinshare6.txt (NT_STATUS_IO_DEVICE_ERROR) (local_flags=578) (flags=578)
[2016/11/27 19:52:23.299697,  3] ../source3/smbd/open.c:881(open_file)
  Error opening file testinshare6.txt (NT_STATUS_IO_DEVICE_ERROR) (local_flags=578) (flags=578)

Thanks for testing that out.

Here is the code from: http://code.metager.de/source/xref/samba/source3/smbd/open.c

873		/*
874		 * Actually do the open - if O_TRUNC is needed handle it
875		 * below under the share mode lock.
876		 */
877		status = fd_open_atomic(conn, fsp, local_flags & ~O_TRUNC,
878				unx_mode, p_file_created);
879		if (!NT_STATUS_IS_OK(status)) {
880			DEBUG(3,("Error opening file %s (%s) (local_flags=%d) "
881				 "(flags=%d)\n", smb_fname_str_dbg(smb_fname),
882				 nt_errstr(status),local_flags,flags));
883			return status;
884		}

flags=578 is 0x242, which means that Samba opened the file with O_RDWR | O_CREAT | O_TRUNC.
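That decomposition can be checked with shell arithmetic, assuming the usual x86-64 Linux &lt;fcntl.h&gt; values (O_RDWR=0x2, O_CREAT=0x40, O_TRUNC=0x200):

```shell
O_RDWR=0x2; O_CREAT=0x40; O_TRUNC=0x200    # Linux <fcntl.h> values (assumed)
echo $(( O_RDWR | O_CREAT | O_TRUNC ))             # prints 578
printf '0x%x\n' $(( O_RDWR | O_CREAT | O_TRUNC ))  # prints 0x242
```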

This isn’t something rclone supports (or can ever support), since none of the cloud providers allow both reading from and writing to a file at the same time.

However, I could work around it in rclone by treating files opened with O_RDWR | O_TRUNC | O_CREAT as write-only and ignoring the read part. If I return a write-only file handle, the user would get an error on reading.

I made a beta with this in for you to try: http://beta.rclone.org/v1.34-38-g7929b6e/ (will be uploaded in 15-30 minutes)

File not found; I will check again a bit later.

Forgot to push it… Hopefully it will be there in 15-30 mins depending on how well Travis is performing tonight…

Still I/O error
administrator@sambaftp ~ $ rclone version
rclone v1.34-38-g7929b6eβ

Samba log:
[2016/11/28 22:11:47.762217,  3] ../source3/smbd/open.c:881(open_file)
  Error opening file betatest.txt (NT_STATUS_IO_DEVICE_ERROR) (local_flags=578) (flags=578)
[2016/11/28 22:11:47.765116,  3] ../source3/smbd/open.c:881(open_file)
  Error opening file betatest.txt (NT_STATUS_IO_DEVICE_ERROR) (local_flags=578) (flags=578)
[2016/11/28 22:11:47.770897,  3] ../source3/smbd/open.c:881(open_file)
  Error opening file betatest.txt (NT_STATUS_IO_DEVICE_ERROR) (local_flags=578) (flags=578)
[2016/11/28 22:11:58.250995,  2] ../source3/smbd/service.c:1148(close_cnum)
  flix-win10 (ipv4:195.95.158.124:58805) closed connection to service GoogleDrive
[2016/11/28 22:11:58.323067,  3] ../source3/smbd/server_exit.c:252(exit_server_common)
  Server exit (NT_STATUS_CONNECTION_RESET)

rclone log when the copy started:
2016/11/28 22:11:47 betatest.txt: File.Open
2016/11/28 22:11:47 betatest.txt: File.Open
2016/11/28 22:11:47 betatest.txt: File.Open

You could try putting a layer such as unionfs between Samba and your rclone mount. That might help Samba: writes would go to a local file system, and reads would continue down to the bottom layer.

Samba

  • Unionfs
    • RW-LOCAL
    • RO-ACD
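A sketch of that layering with unionfs-fuse (the tool choice and the /srv/local-rw and /srv/union paths are assumptions for illustration; the rclone mount path is the one from this thread). Writes land in the copy-on-write RW branch, reads fall through to the read-only rclone mount, and Samba would then share the union directory instead:

```shell
mkdir -p /srv/local-rw /srv/union
# RW branch first, RO branch (the rclone mount) second; -o cow enables
# copy-on-write so modified files are copied up into the RW branch.
unionfs-fuse -o cow,allow_other \
  /srv/local-rw=RW:/home/administrator/googledrive=RO /srv/union
```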

Yeah, but for that I would need a lot of disk space to act as a buffer (while moving files with cron), and there would be problems if users started editing files that were being uploaded at the same time.

The point would be to have an infinite drive that writes directly to the cloud.

Yes… I do something similar and then just periodically push the changes down from local to the cloud. But yes, you would need space for writes until the next push.