Jottacloud: how well is it working with rclone?

Hello everyone,

I'm looking for options to migrate my data from Google Drive before my EDU account goes bust[1], and I just became aware of Jottacloud: https://www.jottacloud.com/en/pricing.html; I was very happy to learn that rclone supports it with the Jottacloud backend.

$7.95/mo for "unlimited" looks like a good price, but before I waste time and money on it, I would like to ask you all:

  1. How well does it work with rclone? How much data are you storing there?

  2. What happens when you exceed the 5TB of used storage they mention on the 'pricing' page above? They say "upload speed is reduced": is that all? How much do they reduce it to?

  3. Any other info/tips/warnings/etc for someone looking to migrate from Google Drive to Jottacloud?

Thanks in advance for your responses.

[1] Long story short: Google notified my alma mater they are going to limit their total space to just 100TB, they have thousands of users and I alone have multiple TBs stored there.

Cheers,
-- Durval.

Regarding upload speed reduction, see this (especially the list at the end): Reduced upload speed | Jottacloud Help Center

I am also curious about this, since I have considered Jottacloud myself. The upload speed reduction bothers me on principle (I'd rather they just not call it unlimited), but I am well below that number anyway.

Some thoughts, though I have not tried it at all:

  • How are their speeds in general? They are based in Norway, which is far from me (pro and con), and that could slow things down.
  • They actually have a really capable API, based on rclone's overview. For me, the most important features are (in order): Copy (or Move), ModTime, ListR, Purge.
  • Its API does have a few limitations, but none of them are too bad for my use case: a 255-character limit (this may get tough, but not if I use restic or something), no streaming upload (meh, don't care too much), and potentially slow listings.

Like I said, I will be watching this thread to see if you get more answers. One thing that gives me pause is that it isn't as popular in this community, but that may change.

Hi @jwink3101 @albertony, thanks for the feedback.

Jottacloud's listed speed reduction makes it practically unusable for the volume of data that I have, so I would need to use (and pay for) more than one account and combine them (with rclone's union remote) in order to get usable speed. In my case, that would mean 3 accounts.

I have been experimenting with rclone's union remote for some time now and having all kinds of issues (I've just posted a specific topic in this regard) so I'm not sure whether it would work well or not with Jottacloud.

As it seems nobody else here has more info to contribute, I guess my next step is to create a few free (5GB-limited) accounts on Jottacloud, string them together with a union remote, and see how well they work.

Will keep this thread posted.

Cheers,
-- Durval.

Ah! So you have a paid account with them? What is it, a "personal" EUR 8/mo account? Can you please tell us how much data you have there, and the kind of speeds you get?

Re: their API completeness and limitations: I've noticed both, and been happy with the former and not so happy with the latter.

Case in point: if "stream upload" means what I think it means, Jottacloud not supporting it would deny me something I currently do a lot, which is streaming tar backups like tar czpf - . | rclone rcat REMOTE:file.tar.gz. Guess this will be one of the first things I will try with them.

I do not have any account with them. I am only considering it, like OP.

Re: stream upload

My understanding from the docs (not tested) is that you can do that, but rclone will first spool the upload to disk. So if your temp space is insufficient, it may fail. It would be the same as you doing:

tar czpf - . > tmp
rclone copyto tmp REMOTE:file.tar.gz

(though my first line may make tar try to archive its own growing output, since tmp is created inside the directory being archived. I am not sure)
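One way to sidestep that risk, assuming you have temp space, is to write the archive outside the tree being archived. A minimal sketch (a throwaway directory stands in for the real source, and the rclone step is left as a commented placeholder):

```shell
# sketch: write the archive outside the directory being archived,
# so tar never tries to read its own growing output
SRC=$(mktemp -d)                 # stand-in for the real source directory
echo demo > "$SRC/file.txt"
TMP=$(mktemp -u).tar.gz          # archive path outside $SRC
tar czpf "$TMP" -C "$SRC" .
tar tzf "$TMP"                   # sanity check: lists the archived files
# then upload and clean up (REMOTE: is a placeholder):
#   rclone copyto "$TMP" REMOTE:file.tar.gz && rm "$TMP"
```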

@jwink3101 thanks for the clarifications.

hi,

with jottacloud, based on their website, there is no mention that i can find of quotas or speed limits for downloading/streaming data and media.

what is the use-case for your data?
backup, media files to be played with a media server, or what?

of the 10TB+ of data, how much of it do you need to access on a daily, weekly, monthly, yearly basis?

just to share what i do.

i need storage mostly for backups that i hope to never access, only in a major ransomware disaster.
and small amount of media files for streaming.

for the backups,
i keep the most recent data, such as veeam backups and data files, in wasabi, an s3-compatible provider known for hot storage, at $6/TB/month.
older data goes to aws s3 glacier deep archive, at $1/TB/month.

and for my personal media needs: some goes to wasabi, the stuff i want to keep long term.
some is downloaded to and streamed from a seedbox, which i plan to watch once and can then delete.

Hello @asdffdsa

I haven't seen anything "official" either. But I have some data (see my next post).

Not sure whether you're asking me or @jwink3101 or both, but responding anyway: my use case is mainly archival and backup. In other words, basically copies of data on my machines in case they go bust, plus archival of stuff I don't plan on accessing frequently enough to keep on the machines themselves. Data is uploaded frequently (many times a week, in small doses) and rarely read back unless I need to restore lost data (except every 6 months, when I read everything back for verification, checking both existence and content with md5sum -c).

Yeah, sometimes I need to read back something from the 'archive' part too, so our usages are pretty similar (main difference being that I never stream -- even if it's a media file, I copy it to local storage before using it, so read speed is not so important for me).

I've seen a bunch of providers with prices around $5/TB, so you could save some on Wasabi -- let me know if you need more details (not posting here to avoid getting OT).

I wasn't aware Glacier was so cheap. $1/TB/month (presuming USD) is cheaper than even Jottacloud, which in my case[1] would cost €24 / 27 TB every month ≈ €0.89/TB/month ≈ $1.05 USD/TB/month at the current EURUSD exchange rate.

OTOH, I remember reading Glacier was unreasonably expensive if you ever need to download your data -- like, ruinously so. Aren't you worried about that? That's a major no-no for me (I would feel like my data was being held hostage), and that's the reason I stopped following Glacier prices etc quite some time ago.

Cheers,
-- Durval.

[1] €24 would be for 3 Jottacloud accounts at €8 each, which joined in an rclone union would give me reasonable upload speed for the volume of data that I have.

every use case is different.

with jottacloud, not knowing about download quotas and speed limits, i could not trust that service if i suffered some data disaster.

not worried at all.
i run a bunch of servers at different locations connected over a vpn.
for any given backup server, there is a copy of its files, including veeam backup files, distributed across one or more other backup servers.
i keep recent data in wasabi, which for download, can saturate a 1Gbps internet connection.

if somehow, multiple backup servers failed, and the backup in wasabi failed, and i have to get the data from aws s3 deep glacier, that would be ok.
been 10 years using this setup and never needed to download from aws.

you can download the data at any time, though there is a delay before you can access it and a cost to do so.
but if you have a server used by 2,000 employees nation-wide, 24x7x365, then as a last resort, making a one-time payment to aws for the data, compared to just $1/TB/month, is a bargain.

And here it is:

  1. I created 4 free accounts on the Jottacloud website (one for me and the others for family members whom I asked to help with the testing, so as to hopefully avoid any TOS violations);
  2. I used rclone config to create 4 remotes, one for each account, accessing https://www.jottacloud.com/web/secure to obtain the "personal login token" as instructed;
  3. I created an rclone union remote JCUNION with upstreams = set to these 4 remotes and create_policy = mfs (all other union settings left at their defaults).
  4. Finally, I created an encrypted remote ENCJCUNION on top of JCUNION, again accepting all rclone defaults.
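For reference, the resulting rclone.conf would look roughly like this. This is a hedged sketch: the remote names come from the steps above, but the tokens/passwords are elided and the exact fields are whatever rclone config generates:

```ini
[jotta1]
type = jottacloud
token = {...}

# jotta2..jotta4 identical, each with its own token

[JCUNION]
type = union
upstreams = jotta1: jotta2: jotta3: jotta4:
create_policy = mfs

[ENCJCUNION]
type = crypt
remote = JCUNION:
password = ***
password2 = ***
```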

I then ran rclone copy LOCALDIR ENCJCUNION: to copy a 72-file, ~16GiB local directory tree to the encrypted Jottacloud union. It ran without any issues and took 4161 seconds, so that's a not-too-unreasonable 3.93MiB/s upload speed.

Finally, I mounted the remote with rclone mount ENCJCUNION: ~/ENCJCUNION/ --low-level-retries=1000 --dir-cache-time 10m --max-read-ahead 256k --vfs-cache-mode=writes and proceeded to download and verify all the data with cd ~/ENCJCUNION; md5sum -c ENCJCUNION.md5.
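The verification step assumes a checksum manifest built before upload. A minimal sketch of that workflow (a throwaway directory stands in for LOCALDIR, and the manifest path is my own invention; in practice you'd run the final check from inside the mount point):

```shell
# build a manifest of every file before upload...
DIR=$(mktemp -d)                  # stand-in for LOCALDIR
echo data > "$DIR/a.txt"
( cd "$DIR" && find . -type f -print0 | xargs -0 md5sum > /tmp/manifest.md5 )
# ...then, after `rclone mount ENCJCUNION: ~/ENCJCUNION ...`,
# run the same check from inside the mount point:
( cd "$DIR" && md5sum -c /tmp/manifest.md5 )
```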

The good news on the download part is that I got no errors: no missing files and no corrupted data.

The bad news is that it took 10458 seconds to do so, so the download speed was a ludicrous 1.56MiB/s (well under half the upload speed).

Perhaps they cap download speed lower than upload speed? Perhaps just for the free accounts? Either way, it doesn't bode well, and I'm not sure I'm willing to spend the money on paid accounts just to find out they are just as bad.

Cheers,
-- Durval.

Agreed, every use case is unique. I can see how paying Glacier's 'ransom' to get your data back would not be so bad in your case.

Thanks for the clarifications.

Cheers,
-- Durval.

if you do decide to use jottacloud, keep us posted.
very curious as to how downloads are handled, in terms of quotas and speeds.

Sending large files to Jottacloud has always given me time-out problems.

If you are going to pay $7.95/month, then you might want to consider Office 365; for $6.99 per month you get:

  • 1TB of storage
  • free copies of Word, Excel, PowerPoint, Outlook, plus Skype minutes

Or get the family plan for $9.99/month and get all that for 6 users (each user gets 1TB). Using an rclone union, you can put all 6x1TB together into one 6TB cloud drive.
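A hedged sketch of what that union could look like in rclone.conf (the remote names are invented; od1..od6 would each be a separate OneDrive account set up via rclone config, tokens elided):

```ini
[od1]
type = onedrive
token = {...}

# od2..od6 likewise, one per family member

[ODUNION]
type = union
upstreams = od1: od2: od3: od4: od5: od6:
```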

I am not a Windows fan, but the cloud drive works pretty well (plus I need the Office apps on my Mac anyway). I currently have close to 600GB stored on Azure.

Another alternative is to use S3. Its infrequent-access tier is really cheap but a little trickier to set up. Office 365 is very easy to set up, and it also syncs all my files between all my computers.


hi,
once you get that large file uploaded, what is the download like?
are you able to max out your internet connection?


Can you please give us more info on what 'large' means in your context? Like, is each file multi-TB, hundreds of GB, tens of GB, a couple of GB?

My experience so far points to issues in downloading, not uploading (and then only with speed, not timeouts).

I am using restic/rclone with a Jottacloud backend. Very few errors; they get corrected by rclone/restic, except for one error as described in Http2 stream closed / connection reset / context canceled - #10 by MichaelEischer - Getting Help - restic forum.
I am waiting for a fix in the Go library.

Currently about 3.6TB on Jottacloud. As far as upload speed goes, I get around 10MB/s (on a 60MB/s connection), but I have the feeling that the bottleneck is restic running on an ARM processor.

I don't have much data on download speed. On every backup (nightly), my script downloads and verifies one random file, and there are no issues with that.


jottacloud free tier and rclone on a linux vps in germany.

download and upload speeds 60-90 MiB/s.
2 weeks of testing, no i/o errors so far.

three usage methods: daily restic backups; rclone mounts as systemd mount units; as a docker volume.
average restic pack is ~100 MB; on-mount files are a couple of 2 GB movies; docker files are small data files regularly rewritten by containers.

the main problem is the very short lifetime of access/refresh tokens. if one accessing method refreshes the token and the others don't access the remote within an hour, they lose their token for good.

as a workaround, I pointed the docker plugin, restic, and the mount unit at a single rclone.conf on a single box so they all use and update the same jotta token, then additionally set docker swarm constraints to bind jotta-using services to that box.


The problem with tokens getting out of sync between rclone instances (docker plugins) can be solved by syncing rclone.conf across nodes.

possible ways:

  1. systemd .path units + rsync
  2. inotify cli + rsync
  3. syncthing
  4. etc
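Option 1 could look roughly like this; a sketch only, where the unit names, paths, and peer host are all assumptions (a .path unit triggers the same-named .service when the config file changes):

```ini
# ~/.config/systemd/user/rclone-conf.path
[Path]
PathChanged=%h/.config/rclone/rclone.conf

[Install]
WantedBy=default.target

# ~/.config/systemd/user/rclone-conf.service
[Service]
Type=oneshot
ExecStart=/usr/bin/rsync -a %h/.config/rclone/rclone.conf peer-host:.config/rclone/rclone.conf
```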

future idea:
the rclone remote control (rc) api has an endpoint to update rclone.conf section values. why not use it?
we can run every involved rclone instance with the --rcd flag and note its ip and port. we just need a new command-line flag --config-to http://ip1:port1,http://apiuser:apipass@ip2:port2,....
every time the rclone config system detects a config value change, it will spawn a concurrent goroutine sending rclone api requests to spread the change (obeying --timeout and --contimeout).

@ncw do you like the idea?


I'm continuing my testing of concurrent jottacloud use on multiple hosts (docker and mounts).

@ncw
here is an rclone patch that implements --config-to user:pass@host:5572 ... and sends config keys out to other rclone instances right upon change (with some tricks to prevent loops)

@durval
here is an rclone binary to try out --config-to (is anybody interested?)

@ncw @buengese
i've seen the jotta client lose its token 2-3 times this week. on most backends, refreshing in parallel processes with independent conf files works fine in spite of the randomness of refresh moments. why does it result in forever-stuck token refreshers for jotta? i'm starting to suspect a bug in the jotta backend.
interestingly, even if I update the token in rclone.conf, something stays broken in the vfs cache, resulting in stuck vfs cache threads (watch goroutine 31 below) and jotta token refreshers (goroutines 60, 16):

the only way to fix it is to rm -rf vfs vfsMeta and restart rclone (or disable then re-enable the docker plugin). I don't know what is at fault: the jotta backend, the plugin container, or my patches. it needs more experiments. if you have time to look at the stack traces, they might ring a bell for you...

@ncw
as you can see, the vfs cache got stuck waiting for downloaders and blocked vfs.New, which is called from mountlib.Mount. interestingly, neither of these accepts a context argument, so they cannot be officially cancelled by timeout. is this design deliberate?

in the docker plugin things get worse, as I keep all mounts in a list guarded by a mutex, so the stuck mount blocks all further api requests from docker (including requests to remove the broken volume!). docker steadily goes mad: it retries the requests, and all of them get stuck on the mutex (see the traces marked "semacquire" above).

as I don't have an official way to cancel Mount after timeout, I added a kludge in the docker plugin code:

Now I run mount in a goroutine. if it doesn't return before the timeout, I return an error to docker and mark the affected volume as stuck. any further attempts to mount it will fail immediately (while the runaway Mount keeps blocking the corrupted volume's vfs structures in the background). I only let docker users remove stuck volumes. the evil is isolated and the docker api is not mad anymore; users can now remove and then recreate the broken volume.
(but due to the possible vfs/jotta bug, removing the disk cache is recommended too.)

to be continued
