Help: We are so close to supporting the new backend FileJump! But I need some help!

Hi!

I created a new backend, FileJump, in my free time. It was hard work! Here is the GitHub issue.

Listing the folder works, for example with
./rclone lsjson filejump:
I also implemented all the other methods of the backend, but there are still some errors. It would be great if someone with experience could take a look at them and help me out!
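For context, a backend plugs into rclone by registering itself with fs.Register at startup; here is a minimal sketch of what that looks like (the api_key option name and all other details are illustrative assumptions, not necessarily what the backend actually uses):

package filejump

import (
	"context"
	"errors"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
)

// init registers the backend so that "type = filejump" in the config
// resolves to this package.
func init() {
	fs.Register(&fs.RegInfo{
		Name:        "filejump",
		Description: "FileJump cloud storage",
		NewFs:       NewFs,
		Options: []fs.Option{{
			Name:      "api_key", // illustrative name, an assumption
			Help:      "API key from the FileJump web interface.",
			Required:  true,
			Sensitive: true,
		}},
	})
}

// NewFs builds the Fs from the config; the real constructor parses the
// options and probes the root. Body elided in this sketch.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
	return nil, errors.New("sketch only")
}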

We are so close to supporting the new backend FileJump :blush:

By the way, if you want to check the API, you can just go to filejump.com and watch the network requests in the developer tools while you do something. The requests are very easy to understand.
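To illustrate, here is a tiny Go probe that sends the same kind of authenticated request the website sends. The endpoint path is purely a guess on my side (take whatever path actually shows up in the network tab), and the bearer-token scheme is an assumption based on the api-key format shown later in this thread:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Hypothetical endpoint: replace it with a path observed in the
	// browser's developer tools.
	req, err := http.NewRequest("GET", "https://filejump.com/api/v1/file-entries", nil)
	if err != nil {
		panic(err)
	}
	// Assumption: the API key acts as a plain bearer token.
	req.Header.Set("Authorization", "Bearer "+os.Getenv("FILEJUMP_API_KEY"))
	req.Header.Set("Accept", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}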


Can you be more specific about the error you encountered?

For example, if I want to copy a file to filejump, I get this log:

$ ./rclone copy -vv /tmp/maaa filejump:/
2024/10/16 19:36:13 DEBUG : rclone: Version "v1.69.0-DEV" starting with parameters ["./rclone" "copy" "-vv" "/tmp/maaa" "filejump:/"]
2024/10/16 19:36:13 DEBUG : Creating backend with remote "/tmp/maaa"
2024/10/16 19:36:13 DEBUG : Using config file from "/Users/masr/.config/rclone/rclone.conf"
2024/10/16 19:36:13 DEBUG : fs cache: renaming child cache item "/tmp/maaa" to be canonical for parent "/tmp"
2024/10/16 19:36:13 DEBUG : Creating backend with remote "filejump:/"
2024/10/16 19:36:13 DEBUG : fs cache: renaming cache item "filejump:/" to be canonical "filejump:"
2024/10/16 19:36:13 NOTICE: maaa: Failed to read metadata: object not found
2024/10/16 19:36:13 DEBUG : maaa: Modification times differ by -2562047h47m16.854775808s: 2024-10-15 21:08:26.46617499 +0200 CEST, 0001-01-01 00:00:00 +0000 UTC
2024/10/16 19:36:15 NOTICE: : Failed to read metadata: is root directory
2024/10/16 19:36:15 INFO  : maaa: Copied (replaced existing) to: 
2024/10/16 19:36:15 INFO  : 
Transferred:   	          5 B / 5 B, 100%, 5 B/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:         1.7s

2024/10/16 19:36:15 DEBUG : 11 go routines active

It says "Failed to read metadata: object not found", but the local object is actually found, it has 5 Bytes:

$ rclone ls /tmp/maaa
        5 maaa

It tells me that the file was copied and that an existing file was replaced, but there was no file or folder called maaa in my FileJump account. It also looks like the copy worked, yet when I check on the website, the file is not there.
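One more clue hiding in the log: -2562047h47m16.854775808s is exactly the smallest value a Go time.Duration can hold. time.Time.Sub clamps to it when one side of the comparison is the zero time 0001-01-01 00:00:00 UTC, which is precisely what the DEBUG line above shows for the destination. A quick self-contained demonstration:

package main

import (
	"fmt"
	"time"
)

func main() {
	src := time.Date(2024, 10, 15, 21, 8, 26, 466174990, time.FixedZone("CEST", 2*60*60))
	var dst time.Time // zero value: 0001-01-01 00:00:00 UTC
	// Sub clamps a result that overflows int64 nanoseconds to the
	// minimum (or maximum) time.Duration:
	fmt.Println(dst.Sub(src)) // -2562047h47m16.854775808s
}

So the destination object is reporting a ModTime that was never filled in, which fits the "Failed to read metadata" notice.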

It seems to be throwing fs.ErrorObjectNotFound from the function readMetaDataForPath(). I would start from there.

I just found out that the error fs.ErrorObjectNotFound appears because the file was not found in filejump:/, so this part is correct! I want to copy a new file to filejump:/, but I don't understand why rclone tries to replace an existing file. The file is new and was not on filejump:/ before:

2024/10/16 19:36:15 INFO  : maaa: Copied (replaced existing) to: 

And I don't understand why the copy was reported as successful when the file was not actually copied: it doesn't appear in my FileJump drive.
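If I read rclone's sync logic correctly, "(replaced existing)" is only printed when the destination backend handed over a non-nil object, so my suspicion would be a NewObject that returns an object even for files that don't exist, instead of failing eagerly. A sketch of the expected contract, as a fragment of the backend (setMetaData and the Object fields here are placeholders, not the backend's real code):

// NewObject finds the object at remote. It must return
// fs.ErrorObjectNotFound for a missing file; otherwise rclone assumes
// the destination already exists and reports "replaced existing".
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
	info, err := f.readMetaDataForPath(ctx, remote)
	if err != nil {
		return nil, err // fs.ErrorObjectNotFound for missing files
	}
	o := &Object{
		fs:     f,
		remote: remote,
	}
	o.setMetaData(info) // placeholder: fill in size, modTime, id, ...
	return o, nil
}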

Can you post rclone lsl for the source file and the dest file?

Can you post the output of rclone backend features filejump:?


Do you work for FileJump?
It seems impossible to do any testing; is there no free plan?
Given the company is just a few months old, it would make sense to offer one.

Why would you/they bother to offer an API at all?
Just add support for real, stable protocols that rclone, FileZilla, and similar tools can use, such as WebDAV, SFTP, etc.

No, I don't work for FileJump. I just bought a 2 TB lifetime plan for $89 and want to make rclone work for me and for others.

$ ./rclone lsl /tmp/empty
        0 2024-10-15 21:07:14.344399893 empty
$ ./rclone lsl filejump:/
    85480 2024-10-16 05:13:32.000000000 test
    85480 2024-10-17 19:55:17.000000000 testfile
    85480 2024-10-17 19:55:40.000000000 test/test
    85480 2024-10-17 19:55:45.000000000 test/test2
 15419033 2024-10-17 19:55:52.000000000 test/test/test3
$ ./rclone lsl filejump:/empty
$ ./rclone backend features filejump:
{
	"Name": "filejump",
	"Root": "",
	"String": "filejump root ''",
	"Precision": 1000000000,
	"Hashes": [],
	"Features": {
		"About": false,
		"BucketBased": false,
		"BucketBasedRootOK": false,
		"CanHaveEmptyDirectories": true,
		"CaseInsensitive": false,
		"ChangeNotify": false,
		"ChunkWriterDoesntSeek": false,
		"CleanUp": false,
		"Command": false,
		"Copy": false,
		"DirCacheFlush": false,
		"DirModTimeUpdatesOnWrite": false,
		"DirMove": false,
		"DirSetModTime": false,
		"Disconnect": false,
		"DuplicateFiles": false,
		"FilterAware": false,
		"GetTier": false,
		"IsLocal": false,
		"ListR": false,
		"MergeDirs": false,
		"MkdirMetadata": false,
		"Move": false,
		"NoMultiThreading": false,
		"OpenChunkWriter": false,
		"OpenWriterAt": false,
		"Overlay": false,
		"PartialUploads": false,
		"PublicLink": false,
		"Purge": false,
		"PutStream": false,
		"PutUnchecked": true,
		"ReadDirMetadata": false,
		"ReadMetadata": false,
		"ReadMimeType": false,
		"ServerSideAcrossConfigs": false,
		"SetTier": false,
		"SetWrapper": false,
		"Shutdown": false,
		"SlowHash": false,
		"SlowModTime": false,
		"UnWrap": false,
		"UserDirMetadata": false,
		"UserInfo": false,
		"UserMetadata": false,
		"WrapFs": false,
		"WriteDirMetadata": false,
		"WriteDirSetModTime": false,
		"WriteMetadata": false,
		"WriteMimeType": false
	},
	"MetadataInfo": null
}

There is a free plan! I created credentials for testing:
User: muagli57@stealthemails.top
Password: filejumptest

But for my rclone config, you just need the API key:
46|MCDoTRerNwfSJ9KrmznkD8eWUWjZuNSwtrbtBsUx922b291b
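For reference, the matching remote in rclone.conf would look something like this (assuming the backend's option is called api_key, as in the registration sketch above; adjust to whatever the backend actually defines):

[filejump]
type = filejump
api_key = 46|MCDoTRerNwfSJ9KrmznkD8eWUWjZuNSwtrbtBsUx922b291b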

And you can copy some files via the website.

Yes, FTP is on their roadmap.

Internally they use Wasabi for file storage; I chatted with their support. Support responds very quickly, directly on their website.

Uploading a file works now! I updated my GitHub project. But I still have to add the logic for creating a directory.
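The hook rclone calls for that is the Mkdir method on the Fs. A rough sketch of the shape it needs, with createDirectory standing in for a hypothetical helper around whatever the FileJump folder-creation endpoint turns out to be:

// Mkdir creates the directory if it doesn't already exist. rclone
// calls it for every directory it needs during a sync.
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
	// createDirectory is a hypothetical helper wrapping the FileJump
	// folder-creation request; it should succeed quietly if the
	// directory is already there.
	_, err := f.createDirectory(ctx, dir)
	return err
}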

Creating a directory and syncing via
./rclone sync /tmp/tosync/ filejump:/tosync
works now with the latest push, but when I sync, the files are copied twice with the same name. I have to find out why they are copied twice.
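My first suspect for the duplicates would be Put: the lsl output above already shows that FileJump happily stores several entries with the same name (a file and a folder both called test), so if Put always creates a new entry, every overwrite becomes a second copy. A sketch of the usual pattern as a backend fragment, reusing NewObject from above and assuming an Update method on Object:

// Put uploads in to src.Remote(), replacing an existing entry instead
// of creating a duplicate with the same name.
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
	existing, err := f.NewObject(ctx, src.Remote())
	switch err {
	case nil:
		// An entry with this name exists: overwrite it in place.
		return existing, existing.Update(ctx, in, src, options...)
	case fs.ErrorObjectNotFound:
		// Genuinely new file: create a fresh object and upload into it.
		o := &Object{fs: f, remote: src.Remote()}
		return o, o.Update(ctx, in, src, options...)
	default:
		return nil, err
	}
}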