How do I clone a large Git repository on an unreliable connection?

Accepted answer
Score: 21

The repository is accessible via the HTTP protocol (aka the dumb protocol) here: http://anongit.freedesktop.org/git/libreoffice/core.git.

You can download everything here with wget or another download manager, and you'll have a clone of the repository. After that, rename the directory from core.git to .git, and use the following commands to tell git about the remote URL:

$ git remote add remote http://anongit.freedesktop.org/git/libreoffice/core.git
$ git reset --hard HEAD
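The post-download steps above can be wrapped up as follows; this is a sketch that assumes the repository directory core.git has already been downloaded (e.g. with wget), and the core.bare line is my addition, since a fetched bare repository marks itself bare in its own config:

```shell
# finish_dumb_clone: turn a downloaded bare repository directory (core.git,
# sitting in the current directory) into a working clone.
finish_dumb_clone() {
    mv core.git .git
    git config core.bare false   # downloaded bare repos mark themselves bare
    git remote add remote http://anongit.freedesktop.org/git/libreoffice/core.git
    git reset --hard HEAD        # rebuild the working tree from .git
}
```

Run it from the directory that should become the working tree.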
Score: 17

Do 'git clone --depth 100'. It should grab the last 100 commits.
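A shallow clone like this can also be deepened after the fact in small, retryable steps with 'git fetch --deepen'. A sketch against a throwaway local repository (all paths and commit counts are illustrative):

```shell
set -e
tmp=$(mktemp -d)

# Build a small repository with three commits to clone from.
git init -q "$tmp/src"
git -C "$tmp/src" config user.email you@example.com
git -C "$tmp/src" config user.name you
for i in 1 2 3; do
    echo "$i" > "$tmp/src/f"
    git -C "$tmp/src" add f
    git -C "$tmp/src" commit -qm "commit $i"
done

# Shallow clone: only the newest commit is transferred.
git clone -q --depth 1 "file://$tmp/src" "$tmp/shallow"
git -C "$tmp/shallow" rev-list --count HEAD    # -> 1

# Deepen the history one commit at a time; each step is a small,
# independently retryable transfer.
git -C "$tmp/shallow" fetch -q --deepen=1 origin
git -C "$tmp/shallow" rev-list --count HEAD    # -> 2
```

On a flaky link, repeating the '--deepen' step until it succeeds transfers the history in bounded chunks instead of one large batch.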

Score: 9

You can do the following:

git clone --depth 1 git@github.com:User/Project.git .
git fetch --unshallow

The first clone will still be atomic, so if your connection is not reliable enough to fetch the current HEAD then you will have trouble.

The subsequent fetch should be incremental and retryable if the connection drops halfway through.
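A retry wrapper around this clone-then-unshallow pattern might look like the sketch below; the function name and the 5-second delay are my own choices:

```shell
# resumable_clone <url> <dir>: shallow-clone first, then keep retrying the
# incremental --unshallow fetch until the full history is present.
resumable_clone() {
    url=$1; dir=$2
    git clone -q --depth 1 "$url" "$dir" || return 1
    # --unshallow resumes from whatever the shallow clone already has,
    # so a retry after a dropped connection does not start over.
    until git -C "$dir" fetch -q --unshallow; do
        echo "fetch interrupted; retrying in 5s..." >&2
        sleep 5
    done
}
# usage: resumable_clone git@github.com:User/Project.git Project
```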

Score: 6

Increase the buffer size so that git can utilize your bandwidth properly. Use the following commands:

git config --global core.compression 0

git config --global http.postBuffer 1048576000

git config --global http.maxRequestBuffer 100M

git clone <repo url>

Wait until the clone completes.
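If you'd rather not change your global configuration, the same settings can be passed per-invocation with 'git -c'; a sketch, with the wrapper name and example URL my own:

```shell
# clone_tuned: clone with the buffer/compression settings from above applied
# only to this invocation, leaving ~/.gitconfig untouched.
clone_tuned() {
    git -c core.compression=0 \
        -c http.postBuffer=1048576000 \
        -c http.maxRequestBuffer=100M \
        clone "$@"
}
# usage: clone_tuned https://example.com/big/repo.git
```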

Score: 3

The best method that I know of is to combine the shallow clone (--depth 1) feature with sparse checkout, that is, checking out only the subfolders or files that you need. (Shallow cloning also implies --single-branch, which is also useful.) See udondan's answer for an example.

Additionally, I use a bash loop to keep retrying until it finishes successfully, like this:

#!/bin/bash

git init <repo_dir>
cd <repo_dir>
git remote add origin <repo_url>

# Optional step: sparse checkout
git config core.sparsecheckout true                     # <-- enable sparse checkout
echo "subdirectory/*" >> .git/info/sparse-checkout      # <-- specify files you need

# Keep pulling until successful
until git pull --depth=1 origin master; do              # <-- shallow clone
    echo "Pulling git repository failed; retrying..."
done

In this way I can eventually pull large repos even over a slow VPN in China…

Importantly, by pulling this way you will still be able to push.
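On Git 2.25+, the manual sparse-checkout configuration above can also be expressed with the 'git sparse-checkout' command; a sketch, where the function and argument names are mine and the checkout step repopulates the working tree after a '--no-checkout' clone:

```shell
# sparse_shallow_clone <url> <dir> <subdir>: shallow clone without a
# checkout, restrict the working tree to one subdirectory, then check out.
sparse_shallow_clone() {
    git clone -q --depth 1 --no-checkout "$1" "$2"
    git -C "$2" sparse-checkout set "$3"
    # Re-checkout the current branch so only the sparse paths materialize.
    git -C "$2" checkout -q "$(git -C "$2" symbolic-ref --short HEAD)"
}
# usage: sparse_shallow_clone https://example.com/big/repo.git repo subdirectory
```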

Score: 2

I used my web hosting server with shell access to clone it first, and then used rsync to copy it locally. rsync would copy only the remaining files when resumed.
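Wrapped in a retry loop, that rsync step resumes partial transfers instead of restarting them; a sketch, with the host and paths as placeholders:

```shell
# resumable_copy <src> <dst>: keep retrying rsync until it completes.
resumable_copy() {
    # --partial keeps half-transferred files so a retry resumes them;
    # -a preserves the repo layout, -z compresses over the wire.
    until rsync -az --partial "$1" "$2"; do
        echo "rsync interrupted; retrying in 5s..." >&2
        sleep 5
    done
}
# usage: resumable_copy shell-host:repos/core.git/ core.git/
```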
