I am also curious about this answer since I have considered it. The upload speed reduction bothers me on principle, since I'd rather they just not advertise it as unlimited, but I am well below that number anyway.
Some thoughts, though I have not tried it at all:
How are their speeds in general? They say they are based in Norway. That is far from me (pro and con) and could slow things down.
They actually have a really capable API, based on rclone's overview. For me, the following are the most important features (in order): Copy (or Move), ModTime, ListR, Purge.
Its API does have a few limitations, but none of them are too bad for my use case: a 255-character limit (this may get tough, but not if I use restic or something), no stream upload (meh, don't care too much), and potentially slow listings.
Like I said, I will be watching this to see if you get more answers. One thing that gives me pause is that it isn't as popular in this community, but that may change.
Jottacloud's listed speed reduction makes it practically unusable for the volume of data that I have, so I would need to use (and pay for) more than one account, and combine them (with rclone's union remote) in order to get usable speed. In my present case, that would mean 3 accounts.
As it seems nobody else here has (or is willing to contribute) more info, I guess my next step is to create a few free (5GB-limited) accounts on Jottacloud, try to string them together with a union remote, and see how well they work.
Ah! So you have a paid account with them? What is it, a "personal" EUR 8/mo account? Can you please tell us how much data you have there, and the kind of speeds you get?
Re: their API completeness and limitations: I've noticed both, and been happy with the former and not so happy with the latter.
Case in point: if "stream upload" means what I think it means, Jottacloud not supporting it would deny me something I currently do a lot, which is tar backups like tar czpf - . | rclone rcat REMOTE:file.tar.gz. Guess this will be one of the first things I will try with them.
I do not have any account with them. I am only considering it like OP
Re: stream upload
My understanding from the docs (but not tested) is that you can do that, but it will first spool to disk. So if your temp space is insufficient then you may not be able to do it. It would be the same as you doing:
tar czpf - . > tmp
rclone copyto tmp REMOTE:file.tar.gz
(though my first line may create an infinite loop, since tmp is written inside the directory being archived. I am not sure)
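One way around that possible loop (a sketch, not tested against Jottacloud; REMOTE and the paths are placeholders) is to write the temp file outside the directory being archived, so tar never sees its own growing output:

```shell
set -e
src=$(mktemp -d)                  # stand-in for the directory to back up
echo hello > "$src/file.txt"
tmp=/tmp/backup.$$.tar.gz         # temp file OUTSIDE $src, so tar can't archive it
tar czpf "$tmp" -C "$src" .
# then upload and clean up (REMOTE is a placeholder remote name):
# rclone copyto "$tmp" REMOTE:file.tar.gz && rm -f "$tmp"
```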
I haven't seen anything "official" either. But I have some data (see my next post).
Not sure whether you're asking me or @jwink3101 or both, but responding anyway: my use case is mainly archival and backup. In other words, basically copies of data on my machines in case they go bust, and archival of stuff I don't plan on accessing frequently enough to keep on the machines themselves. Data is uploaded frequently (many times a week and in small doses), and rarely read back unless I need to restore lost data (except every 6 months, when I read back everything for verification purposes, checking both existence and content with md5sum -c).
Yeah, sometimes I need to read back something from the 'archive' part too, so our usages are pretty similar (main difference being that I never stream -- even if it's a media file, I copy it to local storage before using it, so read speed is not so important for me).
I've seen a bunch of providers with prices around $5/TB, so you could save some on Wasabi -- let me know if you need more details (not posting here to avoid getting OT).
I wasn't aware Glacier was so cheap. $1/TB/month (presuming USD) is cheaper than even Jottacloud, which in my case would cost EUR 24 / 27 TB every month = ~EUR 0.89 =~ USD 1.05 per TB/month (at the current EURUSD exchange rate).
OTOH, I remember reading Glacier was unreasonably expensive if you ever need to download your data -- like, ruinously so. Aren't you worried about that? That's a major no-no for me (I would feel like my data was being held hostage), and that's the reason I stopped following Glacier prices etc. quite some time ago.
EUR 24 would be for 3 Jottacloud accounts at EUR 8 each, which joined in an rclone union would give me reasonable upload speed for the volume of data that I have.
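For what it's worth, the arithmetic above checks out (quick awk sketch; the EURUSD rate of ~1.18 is my assumption for the conversion, not a figure from the thread):

```shell
# EUR 24/month for 27 TB, converted at an assumed EURUSD rate of 1.18
awk 'BEGIN {
  eur_per_tb = 24 / 27
  usd_per_tb = eur_per_tb * 1.18
  printf "EUR %.2f/TB/month = USD %.2f/TB/month\n", eur_per_tb, usd_per_tb
}'
```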
with jottacloud, not knowing about download quotas and speed limits, i could not trust that service if i suffered some data disaster.
not worried at all.
i run a bunch of servers at different locations connected over a vpn.
for any given backup server, there is a copy of the files, including veeam backup files, distributed on one or more backup servers.
i keep recent data in wasabi, which for download, can saturate a 1Gbps internet connection.
if somehow multiple backup servers failed, and the backup in wasabi failed, and i had to get the data from aws s3 deep glacier, that would be ok.
been 10 years using this setup and never needed to download from aws.
you can download the data at any time; there is a delay before you can get access to it tho, and a cost to do that.
but if you have a server used by 2,000 employees nation-wide, 24x7x365, then as a last resort, making a one-time payment to aws for the data, compared to just $1/TB/month for storage, is a bargain.
I created an rclone union remote JCUNION with upstreams set to these 4 remotes and create_policy = mfs (all other union settings left unspecified, i.e. at default).
Finally, I created an encrypted remote ENCJCUNION on top of JCUNION, again accepting all rclone defaults.
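For reference, the layered setup described above would look roughly like this in rclone.conf (a sketch only; jc1..jc4 and the password are placeholders, as the post does not name the underlying remotes):

```ini
# hypothetical rclone.conf sketch for the union + crypt layering
[JCUNION]
type = union
upstreams = jc1: jc2: jc3: jc4:
create_policy = mfs

[ENCJCUNION]
type = crypt
remote = JCUNION:
password = OBSCURED_PASSWORD_HERE
```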
I then ran rclone copy LOCALDIR ENCJCUNION: to copy a 72-file, ~16GiB local directory tree to the encrypted Jottacloud union. It ran without any issues and took 4161 seconds, so that's a not-too-unreasonable 3.93MB/s upload speed.
Finally, I mounted the remote with rclone mount ENCJCUNION: ~/ENCJCUNION/ --low-level-retries=1000 --dir-cache-time 10m --max-read-ahead 256k --vfs-cache-mode=writes and proceeded to download and verify all the data with cd ~/ENCJCUNION; md5sum -c ENCJCUNION.md5.
The good news on the download part is that I got no errors: no missing files and no corrupted data.
The bad news is that it took 10458 seconds to do so, therefore the download speed was a ludicrous 1.56MB/s (way less than half the upload speed).
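The speeds above follow directly from the sizes and times reported (assuming ~16 GiB = 16384 MiB transferred, and that "MB" here really means MiB; the post's figures look truncated rather than rounded):

```shell
# upload: 16384 MiB in 4161 s; download: same data in 10458 s
awk 'BEGIN {
  mib = 16 * 1024
  printf "upload:   %.2f MiB/s\n", mib / 4161
  printf "download: %.2f MiB/s\n", mib / 10458
}'
```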
Perhaps they limit the download speed lower than the upload speed? Perhaps just for the free accounts? Anyway, it doesn't bode well. Not sure I'm willing to spend the money on the paid accounts just to find out they are just as bad.
Sending large files to Jottacloud has always given me time-out problems.
If you are going to pay $7.95/month, then you might want to consider Office 365: for $6.99 per month you get:
1TB of storage
free copies of Word, Excel, PowerPoint, Outlook, plus Skype minutes
Or get the family plan for $9.99/month and get all that for 6 users (each user gets 1TB). Using an rclone union allows you to put all 6x1TB together into one 6TB cloud drive.
I am not a Windows fan, but the cloud drive works pretty well (plus I need the Office apps on my Mac anyway). I currently have close to 600GB stored on Azure.
Another alternative is to use S3. Its infrequent-access tier is really cheap but a little trickier to set up. Office 365 is very easy to install, and it also syncs all my files between all my computers.
jottacloud free tier and rclone on linux vps in germany.
download and upload speeds 60-90 MiB/s
2 weeks testing, no i/o errors so far.
three usage methods: daily restic backups; rclone mounts as systemd mount units; as a docker volume.
average restic pack ~100mb; on-mount files - a couple of 2gb movies; docker files - small data files regularly rewritten by containers.
the main problem is the very short lifetime of the access/refresh tokens. if one of the accessing methods refreshes the token and the others don't access within an hour, they lose the token forever.
as a workaround, i pointed the docker plugin, restic and the mount unit to a single rclone.conf on a single box so they use and update a single jotta token, then additionally set docker swarm constraints to bind jotta-using services to that single box.
The problem with tokens getting out of sync between rclone instances (docker plugins) can be solved by syncing rclone.conf across nodes.
systemd .path units + rsync
inotify cli + rsync
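The first option could look something like this (a sketch; the config path, unit names, and peer host are placeholders, not from the thread):

```ini
# hypothetical /etc/systemd/system/rclone-conf-sync.path
[Path]
PathChanged=/home/user/.config/rclone/rclone.conf

[Install]
WantedBy=multi-user.target

# hypothetical /etc/systemd/system/rclone-conf-sync.service
# (triggered by the .path unit whenever rclone.conf changes)
[Service]
Type=oneshot
ExecStart=/usr/bin/rsync -a /home/user/.config/rclone/rclone.conf node2:/home/user/.config/rclone/
```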
rclone's rc api has an endpoint to update rclone.conf section values. why not use it?
we can run every involved rclone instance with the --rc flag and note its ip and port. we just need a new command-line flag like --config-to http://ip1:port1,http://apiuser:apipass@ip2:port2,....
every time rclone's config system detects a config value change, it would spawn a concurrent goroutine to send rclone api requests spreading the change (obeying --timeout and --contimeout).
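Until such a flag exists, the same spreading could be done by hand against rclone's existing rc config/update endpoint (a sketch; the remote name "jotta", the addresses and the credentials are placeholders):

```shell
# JSON body for rclone's rc config/update endpoint: push a refreshed
# token for the "jotta" remote to a peer (all names/hosts are placeholders)
payload='{"name": "jotta", "parameters": {"token": "NEW_TOKEN_JSON"}}'
# for each peer instance started with --rc:
#   curl -u apiuser:apipass -H 'Content-Type: application/json' \
#        -d "$payload" http://ip1:port1/config/update
echo "$payload"
```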