
git-lfs: Upload never terminates, although file size already exceeded  #17311

Open

@schmittlauch (Contributor)

Description

Gitea Version

1.14.6

Git Version

2.31.1

Operating System

NixOS

How are you running Gitea?

I run gitea as set up by the NixOS module, that means built from source and set up as a systemd service.

Database

MySQL

Can you reproduce the bug on the Gitea demo site?

No¹

Log Gist

https://gist.github.com/schmittlauch/0b6f47c899daa277b4d9df7d2861fe97

Description

I added a new 6.5 GiB file to my Git repo via Git LFS and tried to push the repository to my Gitea remote, either via git push or git lfs push.

The upload started normally. After a while, though, I noticed that far more data had already been uploaded than the file actually contains:

$ git push mygit mainline
loading LFS objects:   0% (0/1), 9.2 GB | 5.0 MB/s 

The point at which the upload size exceeded the actual file size is around the time the Error: unexpected EOF message was logged in the Gitea log.

The error occurred reproducibly across several attempts; I always had to cancel the upload process manually.

The same repo – including the LFS files – has successfully been pushed to a GitLab server.

¹Due to the large file size, I did not attempt to reproduce this issue on try.gitea.io.

Activity

schmittlauch (Contributor, Author) commented on Oct 15, 2021

This issue has previously been described in the forum, without finding a proper solution: https://discourse.gitea.io/t/solved-git-lfs-upload-repeats-infinitely/635/3

I have personally tried the config changes suggested there; they do not help, and I also fail to see how any of these config parameters could be connected to this issue.

lunny (Member) commented on Oct 15, 2021

I have tested locally with 5636204 and cannot reproduce it.

lunny (Member) commented on Oct 15, 2021

What's your LFS storage configuration?

schmittlauch (Contributor, Author) commented on Oct 15, 2021

The configuration is nothing unusual. Excerpt:

[server]
DISABLE_SSH=false
DOMAIN=git.orlives.de
HTTP_ADDR=0.0.0.0
HTTP_PORT=3000
LFS_CONTENT_PATH=/srv/git/lfs-data
LFS_HTTP_AUTH_EXPIRY=120m
LFS_JWT_SECRET=s98dH5q8f4szeTw9EN8WQVVmD3VluKwzCFcmdbe1pOM
LFS_START_SERVER=true
OFFLINE_MODE=true
ROOT_URL=https://git.orlives.de/
SSH_PORT=2342
STATIC_ROOT_PATH=/nix/store/gmdwyjgfld7vvw678x3f1q2ccx97g0h9-gitea-1.14.6-data

[service]
DISABLE_REGISTRATION=true
ENABLE_NOTIFY_MAIL=true

[session]
COOKIE_NAME=session
COOKIE_SECURE=true
PROVIDER=file

schmittlauch (Contributor, Author) commented on Oct 15, 2021

It might be worth noting that the Gitea instance runs behind an HTTP/2-enabled nginx reverse proxy. client_max_body_size is set to 0 (unlimited), so that shouldn't be an issue.

schmittlauch (Contributor, Author) commented on Oct 16, 2021

And yes, I have noticed that I just published the LFS JWT secret; it has been changed now /0\

simonwu-os commented on May 16, 2022

I encountered this problem today.
In the end I fixed it by enlarging the keepalive and send timeouts in the nginx config:

keepalive_timeout 100;
send_timeout 100;
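For context, such directives sit in the server (or http) block of the nginx config. A minimal sketch of a reverse-proxy setup with these timeouts; the host name, port, and the proxy_request_buffering choice are assumptions for illustration, not from this thread:

```nginx
# Hypothetical reverse-proxy config for a Gitea backend; names/ports are placeholders.
server {
    listen 443 ssl http2;
    server_name git.example.com;

    client_max_body_size 0;    # 0 disables the request-body size limit
    keepalive_timeout 100;     # seconds; nginx default is 75s
    send_timeout 100;          # seconds; nginx default is 60s

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_request_buffering off;  # stream large LFS uploads to the backend
    }
}
```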

theobjectivedad commented on Oct 15, 2022

I am experiencing a similar issue, again behind an nginx proxy running in k8s. I see messages similar to these repeating in the Gitea logs during upload:

2022-10-15 09:36:19	
kube02 .. 2022/10/15 14:36:19 ...rvices/lfs/server.go:323:func1() [E] [634ac543] Error putting LFS MetaObject [f8eaf0b805e3094682da668f1097ccd98deec35240c677c2089cb486d69f0b64] into content store. Error: Put "https://s3.kube/gitea/lfs/f8/ea/f0b805e3094682da668f1097ccd98deec35240c677c2089cb486d69f0b64?partNumber=11&uploadId=2~DD9lkoTTgIbOLrtJBMIvRdhPLy7rpbz": Connection closed by foreign host https://s3.kube/gitea/lfs/f8/ea/f0b805e3094682da668f1097ccd98deec35240c677c2089cb486d69f0b64?partNumber=11&uploadId=2~DD9lkoTTgIbOLrtJBMIvRdhPLy7rpbz. Retry again.

2022-10-15 09:36:19	
kube02 .. 2022/10/15 14:36:19 ...lfs/content_store.go:75:Put() [E] [634ac543] Whilst putting LFS OID[f8eaf0b805e3094682da668f1097ccd98deec35240c677c2089cb486d69f0b64]: Failed to copy to tmpPath: f8/ea/f0b805e3094682da668f1097ccd98deec35240c677c2089cb486d69f0b64 Error: Put "https://s3.kube/gitea/lfs/f8/ea/f0b805e3094682da668f1097ccd98deec35240c677c2089cb486d69f0b64?partNumber=11&uploadId=2~DD9lkoTTgIbOLrtJBMIvRdhPLy7rpbz": Connection closed by foreign host https://s3.kube/gitea/lfs/f8/ea/f0b805e3094682da668f1097ccd98deec35240c677c2089cb486d69f0b64?partNumber=11&uploadId=2~DD9lkoTTgIbOLrtJBMIvRdhPLy7rpbz. Retry again.

The Gitea message obviously looks like a bad nginx configuration, but for the life of me I cannot replicate it outside of Gitea. When I upload the same files through the nginx proxy via the aws CLI, everything works perfectly:

aws --profile=ceph --endpoint-url=https://s3.kube --recursive s3 cp binaries s3://datapull/tmp/                                                                                                                                         
upload: binaries/nifi-toolkit-1.18.0-bin.zip to s3://datapull/tmp/nifi-toolkit-1.18.0-bin.zip
upload: binaries/nifi-registry-1.18.0-bin.zip to s3://datapull/tmp/nifi-registry-1.18.0-bin.zip
upload: binaries/nifi-1.18.0-bin.zip to s3://datapull/tmp/nifi-1.18.0-bin.zip

Update:

As mentioned at the end of the previously shared discourse URL, adding this to my client .git/config worked for me: ... SOLVED!

[http]
	postBuffer = 624288000
[lfs]
	concurrenttransfers = 10
	activitytimeout = 3600
	dialtimeout = 3600
	keepalive = 3600
	tlstimeout = 3600
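The same settings can be applied via the git config command instead of editing .git/config by hand. A sketch, using a throwaway repo for demonstration (in practice, run these inside your own clone; the values mirror the snippet above):

```shell
# Apply the LFS client timeouts from the snippet above via `git config`.
cd "$(mktemp -d)"
git init -q
git config http.postBuffer 624288000
git config lfs.concurrenttransfers 10
git config lfs.activitytimeout 3600
git config lfs.dialtimeout 3600
git config lfs.keepalive 3600
git config lfs.tlstimeout 3600
git config --get lfs.activitytimeout   # prints 3600
```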

normantaipei commented on Mar 25, 2024

I fixed it by changing ROOT_URL, SSH_DOMAIN, and DOMAIN to my local IP in the app.ini file.

stevapple (Contributor) commented on Mar 25, 2024

For me, the ultimate solution was to increase lfs.activitytimeout on the client side.

It turns out that the other configurations from @theobjectivedad's snippet may not be required.

CHN-beta commented on May 22, 2024

I encountered the same issue when uploading an LFS object of about 25 GB.
I use btrfs, with beesd deduplicating in the background.
I solved the issue by temporarily stopping beesd.

QrackEE commented on Sep 30, 2024

Thanks @stevapple, that fixed the problem for me! Do we agree here that the problem is more with Git not producing a timeout error correctly (and continuing to send 'some' data)?

stevapple (Contributor) commented on Sep 30, 2024

Do we agree here that the problem is more with Git not producing a timeout error correctly (and continuing to send 'some' data)?

It's more of a problem with Gitea's LFS implementation. Gitea collects the whole LFS object and then uploads it to S3; it should instead chunk the object and upload the chunks as they arrive.
