Container Registries

This page describes how to locally host container images in a Kubernetes cluster, following best practices for security and availability.

Self-Hosted using CNCF Distribution

Install Docker

Follow https://docs.docker.com/engine/install/rhel/ for installing Docker CE on the Kubernetes Master node.

Remove existing conflicting packages and install Docker packages:

dnf remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine \
                  podman \
                  runc
dnf -y install dnf-plugins-core
dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Enable and start the Docker service:

[root@mawenzi-01 registry]# sudo systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

Optional: Set up network proxy for the Docker daemon:

cat > /etc/docker/daemon.json <<EOF
{
  "proxies": {
    "http-proxy": "http://hpeproxy.its.hpecorp.net:80",
    "https-proxy": "http://hpeproxy.its.hpecorp.net:443",
    "no-proxy": "localhost,127.0.0.1,us.cray.com,americas.cray.com,dev.cray.com,hpc.amslabs.hpecorp.net,eag.rdlabs.hpecorp.net,github.hpe.com,jira-pro.its.hpecorp.net"
  }
}
EOF
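
Before restarting, it’s worth validating the file: a syntax error in daemon.json prevents dockerd from starting at all. A minimal check, shown here against a scratch copy of the JSON above (point it at /etc/docker/daemon.json in practice):

```shell
# Write a scratch copy of the proxy config and confirm it parses as JSON.
cat > /tmp/daemon.json <<'EOF'
{
  "proxies": {
    "http-proxy": "http://hpeproxy.its.hpecorp.net:80",
    "https-proxy": "http://hpeproxy.its.hpecorp.net:443",
    "no-proxy": "localhost,127.0.0.1"
  }
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```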

Then, restart the Docker service:

[root@mawenzi-01 registry]# systemctl restart docker

# Test pulling the httpd:2 image from Docker Hub to verify the proxy configuration:
[root@mawenzi-01 ~]# docker pull httpd:2
2: Pulling from library/httpd
47a6bbb8d6d8: Pull complete
206356c42440: Pull complete
242bb693a1d3: Pull complete
7079f4501db6: Pull complete
4f4fb700ef54: Pull complete
80414dfb8f15: Pull complete
ea9cd2f459bd: Download complete
44fabeaa234c: Download complete
Digest: sha256:96b1e8f69ee3adde956e819f7a7c3e706edef7ad88a26a491734015e5c595333
Status: Downloaded newer image for httpd:2
docker.io/library/httpd:2

Self-signed TLS Certificates

On the Kubernetes Master node, create the directory structure for the registry’s auth and certificate files:
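
The auth/ and certs/ subdirectories can be created in one step:

```shell
# Create the registry config layout (auth for htpasswd, certs for TLS files).
mkdir -p components/registry/{auth,certs}
```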

[root@mawenzi-01 ~]# tree components/
components/
└── registry
    ├── auth
    └── certs

3 directories, 0 files

Run the following command from the registry/ directory to generate the self-signed TLS certificate (.crt) and private key (.key) for the registry:

openssl req -x509 -newkey rsa:4096 -days 365 -nodes -sha256 -keyout certs/tls.key -out certs/tls.crt -subj "/CN=my-registry" -addext "subjectAltName = DNS:my-registry"
Example
[root@mawenzi-01 registry]# tree .
.
├── auth
└── certs
    ├── tls.crt
    └── tls.key

2 directories, 2 files

[root@mawenzi-01 registry]# cat certs/tls.crt
-----BEGIN CERTIFICATE-----
MIIFJTCCAw2gAwIBAgIUNh1mD+LnU7qA02Y/bgnecd16dlQwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAwwLbXktcmVnaXN0cnkwHhcNMjYwMzE1MTkwMTIwWhcNMjcw
MzE1MTkwMTIwWjAWMRQwEgYDVQQDDAtteS1yZWdpc3RyeTCCAiIwDQYJKoZIhvcN
AQEBBQADggIPADCCAgoCggIBAK7WuJYGle8EJLBkbpbvSLZDZ9BKRibEu7s8clmW
GZsRfDkdmFe6n/ZX3rU+PAp815id33fWY/72SSW/7g2gL0fmmNyrvkLcn+dYwxAZ
IofpTQMCToZ/Ph57cKp
...
cKkLPJ2JaFvvq0OW3SZhQI7uD0oBXsKlXA==
-----END CERTIFICATE-----

[root@mawenzi-01 registry]# cat certs/tls.key
-----BEGIN PRIVATE KEY-----
MIIJQQIBADANBgkqhkiG9w0BAQEFAASCCSswggknAgEAAoICAQCu1riWBpXvBCSw
ZG6W70i2Q2fQSkYmxLu7PHJZlhmbEXw5HZhXup/2V961PjwKfNeYnd931mP+9kkl
v+4NoC9H5pjcq75C3J/nWMMQGSKH6U0DAk6Gfz4ee3CqeNFWjkaSHVU/+NTit/Qf
LZH0yLjiVIx504ZM6lCySBVo6GSYtR51w3b/1X1Gnyt4SEGheAyQyJe1Z6QMXiPQ
LHnvNUNwTrV7y2kLNx3MHk
...
thDf95LZdcTz5Qo4WLcKCILAvasRDuTiJ3gfFew8jik09oKwPtQuCXWOTEIKL3H8
Y5FPmsq1uwKEU7lztSy/3WRh6FDd
-----END PRIVATE KEY-----
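
It’s worth confirming that the CN and subjectAltName in the generated certificate match the hostname the nodes will use (my-registry); a mismatch causes docker login to fail TLS verification later. A sketch, using a throwaway 2048-bit key pair written to /tmp:

```shell
# Generate a throwaway cert with the same flags as above, then inspect it.
openssl req -x509 -newkey rsa:2048 -days 365 -nodes -sha256 \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -subj "/CN=my-registry" -addext "subjectAltName = DNS:my-registry"
openssl x509 -in /tmp/tls.crt -noout -subject -ext subjectAltName
```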

Encoded Auth Password

Now, we need to generate basic auth credentials for the registry. Run the following command to create a username and password entry, hashed with bcrypt (the -B flag):

docker run --rm --entrypoint htpasswd httpd:2 -Bbn myuser mypassword > auth/htpasswd
Example
[root@mawenzi-01 registry]# cat auth/htpasswd
myuser:$2y$05$RoYTuiXSVOqz9xTBFbljwObUXXPK60XfW2RmaUC.Yf.DEMuyytzVO

At this point, we should have the following components/ directory structure:

[root@mawenzi-01 ~]# tree components/
components/
└── registry
    ├── auth
    │   └── htpasswd
    └── certs
        ├── tls.crt
        └── tls.key

3 directories, 3 files

Create a Kubernetes Secret to Mount Certificates

[root@mawenzi-01 ~]# kubectl create secret tls certs-secret --cert components/registry/certs/tls.crt --key components/registry/certs/tls.key
secret/certs-secret created

[root@mawenzi-01 ~]# kubectl get secrets
NAME           TYPE                DATA   AGE
certs-secret   kubernetes.io/tls   2      20s

[root@mawenzi-01 ~]# kubectl describe secret certs-secret
Name:         certs-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1846 bytes
tls.key:  3268 bytes

Create a Kubernetes Secret to Mount Auth Password

[root@mawenzi-01 ~]# kubectl create secret generic auth-secret --from-file components/registry/auth/htpasswd
secret/auth-secret created

[root@mawenzi-01 ~]# kubectl get secrets
NAME           TYPE                DATA   AGE
auth-secret    Opaque              1      10s
certs-secret   kubernetes.io/tls   2      3m11s

[root@mawenzi-01 auth]# kubectl describe secret auth-secret
Name:         auth-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
htpasswd:  69 bytes

Create PV and PVC for the Registry

For image storage, we’ll be using local storage on the Kubernetes Master node, and creating a PersistentVolume (PV) and PersistentVolumeClaim (PVC) to manage the storage for the registry.

Create a Partition/Filesystem on a Storage Drive

We’ll be using /dev/sdc for the registry storage. First, we need to create a partition and filesystem on the drive:

[root@mawenzi-01 ~]# lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0   1.6T  0 disk
├─sda1                      8:1    0   600M  0 part
├─sda2                      8:2    0     1G  0 part
└─sda3                      8:3    0   1.6T  0 part
  ├─rl_mawenzi--01-swap   253:2    0     4G  0 lvm
  ├─rl_mawenzi--01-home   253:3    0   1.6T  0 lvm
  └─rl_mawenzi--01-root   253:4    0    70G  0 lvm
sdb                         8:16   0 372.6G  0 disk
├─sdb1                      8:17   0   600M  0 part /boot/efi
├─sdb2                      8:18   0     1G  0 part /boot
└─sdb3                      8:19   0   371G  0 part
  ├─rl_mawenzi--0100-root 253:0    0    70G  0 lvm  /
  ├─rl_mawenzi--0100-swap 253:1    0     4G  0 lvm
  └─rl_mawenzi--0100-home 253:5    0   297G  0 lvm  /home
sdc                         8:32   0   1.5T  0 disk
sdd                         8:48   0   1.5T  0 disk

# Zap the disk to remove any existing partition table:
[root@mawenzi-01 ~]# sgdisk --zap-all /dev/sdc
Creating new GPT entries in memory.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.

# Format disk to create one partition that spans the entire disk:
[root@mawenzi-01 ~]# fdisk /dev/sdc
...
Command (m for help): g
Created a new GPT disklabel (GUID: 46555811-65DC-174F-8A8A-204CF6FA2390).

Command (m for help): n
Partition number (1-128, default 1): 1
First sector (2048-3125627534, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-3125627534, default 3125627534):

Created a new partition 1 of type 'Linux filesystem' and of size 1.5 TiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

# Create an ext4 filesystem on the new partition:
[root@mawenzi-01 ~]# mkfs.ext4 -L registry /dev/sdc1
mke2fs 1.46.5 (30-Dec-2021)
Discarding device blocks: done
Creating filesystem with 390703185 4k blocks and 97681408 inodes
Filesystem UUID: 28ec16d2-94e4-4118-9cf2-07a2f16d6c95
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

# Mount the new partition to a directory for the registry data:
[root@mawenzi-01 ~]# mkdir -p /mnt/registry && mount /dev/sdc1 /mnt/registry/

# Verify
[root@mawenzi-01 ~]# df -Th /mnt/registry
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdc1      ext4  1.5T   28K  1.4T   1% /mnt/registry

# Configure Persistent Mounting
[root@mawenzi-01 ~]# blkid /dev/sdc1
/dev/sdc1: LABEL="registry" UUID="28ec16d2-94e4-4118-9cf2-07a2f16d6c95" TYPE="ext4" PARTUUID="1a1ca225-cd45-f041-833e-6c54722ce619"

# Add a line to /etc/fstab using the UUID, which is more reliable than the device name.
[root@mawenzi-01 ~]# cat >> /etc/fstab <<EOF
UUID=28ec16d2-94e4-4118-9cf2-07a2f16d6c95 /mnt/registry ext4 defaults 0 2
EOF

# Reload systemd to apply the changes to /etc/fstab:
[root@mawenzi-01 ~]# systemctl daemon-reload

# Test by unmounting and remounting:
[root@mawenzi-01 ~]# umount /mnt/registry
[root@mawenzi-01 ~]# mount -a
[root@mawenzi-01 ~]# df -Th /mnt/registry
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdc1      ext4  1.5T   28K  1.4T   1% /mnt/registry

Create PV and PVC Resource Definitions

registry-volume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
spec:
  capacity:
    storage: 1400Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/registry

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1400Gi
Example
# Create PV and PVC resources
[root@mawenzi-01 components]# kubectl apply -f registry-volume.yml
persistentvolume/registry-pv created
persistentvolumeclaim/registry-pvc created

# Describe resources
[root@mawenzi-01 components]# kubectl describe pv registry-pv
Name:            registry-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:
Status:          Bound
Claim:           default/registry-pvc
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1400Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/registry
    HostPathType:
Events:            <none>

[root@mawenzi-01 components]# kubectl describe pvc registry-pvc
Name:          registry-pvc
Namespace:     default
StorageClass:
Status:        Bound
Volume:        registry-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1400Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>
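
One caveat worth noting: a hostPath volume refers to the path on whichever node a pod is scheduled to, so it is not pinned to the node where /dev/sdc1 was actually formatted. If the registry must only run where the drive lives, a local PV with node affinity expresses that. A sketch (the hostname value is an assumption based on this cluster and would need adjusting):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv-local
spec:
  capacity:
    storage: 1400Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /mnt/registry
  nodeAffinity:                # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - mawenzi-01   # node where /dev/sdc1 is mounted
```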

Create Deployment for the Registry

Use the following Docker registry image: https://hub.docker.com/_/registry.

Reference https://oneuptime.com/blog/post/2026-02-08-how-to-set-up-docker-registry-with-basic-auth-and-htpasswd/view to see how basic auth and TLS can be configured for the registry with htpasswd.

registry-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:3.0.0
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: registry-vol
              mountPath: "/var/lib/registry"
            - name: certs-vol
              mountPath: "/certs"
              readOnly: true
            - name: auth-vol
              mountPath: "/auth"
              readOnly: true
          env:
            # See documentation for configuring authentication and TLS for the registry with htpasswd
            - name: REGISTRY_AUTH
              value: "htpasswd"
            - name: REGISTRY_AUTH_HTPASSWD_REALM
              value: "Registry Realm"
            - name: REGISTRY_AUTH_HTPASSWD_PATH
              value: "/auth/htpasswd"
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: "/certs/tls.crt"
            - name: REGISTRY_HTTP_TLS_KEY
              value: "/certs/tls.key"
      volumes:
        - name: registry-vol
          persistentVolumeClaim:
            claimName: registry-pvc
        - name: certs-vol
          secret:
            secretName: certs-secret
        - name: auth-vol
          secret:
            secretName: auth-secret
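
Optionally, a readiness probe on the registry port keeps a replica out of the Service endpoints until it is actually listening. A sketch (the timing values are assumptions; the block nests under the container spec alongside env):

```yaml
          readinessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 10
```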

Apply the Deployment and verify that the registry pods are running:

[root@mawenzi-01 components]# kubectl apply -f registry-deployment.yml
deployment.apps/registry-deployment created
[root@mawenzi-01 components]# kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
registry-deployment-66546dc7bc-t7zxc   1/1     Running   0          99s   10.244.1.2   mawenzi-02   <none>           <none>
registry-deployment-66546dc7bc-w6r45   1/1     Running   0          99s   10.244.2.2   mawenzi-03   <none>           <none>

If you want, you can also check the logs of the containerd service on the Kubernetes Worker nodes where these pods came up. You should see:

...
Mar 15 14:34:52 mawenzi-02 containerd[118256]: time="2026-03-15T14:34:52.499387057-06:00" level=info msg="PullImage \"registry:3.0.0\""
Mar 15 14:34:55 mawenzi-02 containerd[118256]: time="2026-03-15T14:34:55.802120474-06:00" level=info msg="ImageCreate event name:\"docker.io/library/registry:3.0.0\" labe>
Mar 15 14:34:55 mawenzi-02 containerd[118256]: time="2026-03-15T14:34:55.802836281-06:00" level=info msg="stop pulling image docker.io/library/registry:3.0.0: active requ>
Mar 15 14:34:55 mawenzi-02 containerd[118256]: time="2026-03-15T14:34:55.803923728-06:00" level=info msg="ImageCreate event name:\"sha256:99b916d8206bcbfc54c95fb060767519>
Mar 15 14:34:55 mawenzi-02 containerd[118256]: time="2026-03-15T14:34:55.807729957-06:00" level=info msg="ImageCreate event name:\"docker.io/library/registry@sha256:6c566>
Mar 15 14:34:55 mawenzi-02 containerd[118256]: time="2026-03-15T14:34:55.809091452-06:00" level=info msg="Pulled image \"registry:3.0.0\" with image id \"sha256:99b916d82>
Mar 15 14:34:55 mawenzi-02 containerd[118256]: time="2026-03-15T14:34:55.809122590-06:00" level=info msg="PullImage \"registry:3.0.0\" returns image reference \"sha256:99>
Mar 15 14:34:55 mawenzi-02 containerd[118256]: time="2026-03-15T14:34:55.812111232-06:00" level=info msg="CreateContainer within sandbox \"4ee81b09d386ebcf351fdca13f8e579

Troubleshooting: If the registry pods are not coming up, check the Deployment’s events and describe the pods to see whether the problem is pulling the registry image or mounting the volumes. You might need to configure containerd proxy settings on the worker nodes so they can pull the registry image from Docker Hub, or pre-pull the image into containerd’s k8s.io namespace on each worker (e.g. ctr -n k8s.io images pull docker.io/library/registry:3.0.0). Note that docker pull alone does not help here: Docker and containerd keep separate image stores.

Create Service for Registry Deployment

Now, we need to create a Service of type ClusterIP to expose the registry deployment to the cluster.

registry-service.yml
apiVersion: v1
kind: Service
metadata:
  name: registry-service
spec:
  selector:
    app: registry
  ports:
    - port: 5000
      targetPort: 5000

Apply the Service and verify that it’s running:

[root@mawenzi-01 components]# kubectl apply -f registry-service.yml
service/registry-service created
[root@mawenzi-01 components]# kubectl get services -o wide
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE    SELECTOR
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP    199d   <none>
registry-service   ClusterIP   10.111.36.167   <none>        5000/TCP   8s     app=registry

[root@mawenzi-01 components]# kubectl describe service registry-service
Name:                     registry-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=registry
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.36.167
IPs:                      10.111.36.167
Port:                     <unset>  5000/TCP
TargetPort:               5000/TCP
Endpoints:                10.244.1.2:5000,10.244.2.2:5000
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>

Note how we got a ClusterIP of 10.111.36.167 for the registry service. This is the internal IP address that other pods in the cluster can use to access the registry.

Also note how the endpoints for the service are the IP addresses of the registry pods.

Add Domain Entry for Service

Export two variables to capture the registry name and IP; we’ll use these to set entries in /etc/hosts on every node in the cluster.

[root@mawenzi-01 components]# kubectl get services
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP    199d
registry-service   ClusterIP   10.111.36.167   <none>        5000/TCP   9m8s
[root@mawenzi-01 components]# export REGISTRY_NAME=my-registry
[root@mawenzi-01 components]# export REGISTRY_IP=10.111.36.167
for node in $(kubectl get nodes --no-headers | awk '{print $1}'); do
  ssh $node "echo -e '$REGISTRY_IP $REGISTRY_NAME' >> /etc/hosts"
done

So, instead of relying on DNS, we have /etc/hosts entries on each node that tie my-registry to the ClusterIP of the registry service.

Test Docker Login to Registry

Now, we’ll attempt to log in via docker login:

[root@mawenzi-01 ~]# docker login my-registry:5000 -u myuser -p mypassword
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://my-registry:5000/v2/": tls: failed to verify certificate: x509: certificate signed by unknown authority

See the Troubleshooting section below if you run into a different, networking-related issue.

This is because the registry is using a self-signed TLS certificate, and the Docker client does not trust it by default. To fix this, we can add the self-signed certificate to the Docker client’s trusted certificates on each node.

dnf install ca-certificates
cp components/registry/certs/tls.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
mkdir -p /etc/docker/certs.d/my-registry:5000
cp components/registry/certs/tls.crt /etc/docker/certs.d/my-registry:5000/ca.crt
systemctl restart docker
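
A quick local sanity check that a certificate will verify once it is trusted: a self-signed certificate is its own CA, so it should verify against itself. Sketch with a throwaway cert:

```shell
# Generate a short-lived throwaway cert, then verify it against itself as CA.
openssl req -x509 -newkey rsa:2048 -days 1 -nodes -sha256 \
  -keyout /tmp/t.key -out /tmp/t.crt -subj "/CN=my-registry" \
  -addext "subjectAltName = DNS:my-registry"
openssl verify -CAfile /tmp/t.crt /tmp/t.crt
# → /tmp/t.crt: OK
```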

Login again:

[root@mawenzi-01 auth]# docker login my-registry:5000 -u myuser -p mypassword
WARNING! Using --password via the CLI is insecure. Use --password-stdin.

WARNING! Your credentials are stored unencrypted in '/root/.docker/config.json'.
Configure a credential helper to remove this warning. See
https://docs.docker.com/go/credential-store/

Login Succeeded

Test Pushing an Image to the Registry

Now, we’ll test pushing an image to the registry. First, we need to tag an existing image with the registry’s address. Pull an image to re-tag:

[root@mawenzi-01 auth]# docker pull nginx:latest
latest: Pulling from library/nginx
9eef040df109: Pull complete
a9d395129dce: Pull complete
df9da45c1db2: Pull complete
79697674b897: Pull complete
75a1d70aee50: Pull complete
18a071c04bd1: Pull complete
23abb0f9ce55: Download complete
d99947bc9177: Download complete
Digest: sha256:bc45d248c4e1d1709321de61566eb2b64d4f0e32765239d66573666be7f13349
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

Re-tag it, and push it:

[root@mawenzi-01 ~]# docker tag nginx:latest my-registry:5000/mynginx:v1
[root@mawenzi-01 ~]# docker push my-registry:5000/mynginx:v1
The push refers to repository [my-registry:5000/mynginx]
9eef040df109: Pushed
75a1d70aee50: Pushed
df9da45c1db2: Pushed
18a071c04bd1: Pushed
79697674b897: Pushed
a9d395129dce: Pushed
206356c42440: Pushed
v1: digest: sha256:a6bead2c897e9e39ca1a2dbd241f96dc181c8d32adcb6201258624fb37d2c7fe size: 2290

i Info → Not all multiplatform-content is present and only the available single-platform image was pushed
         sha256:bc45d248c4e1d1709321de61566eb2b64d4f0e32765239d66573666be7f13349 -> sha256:a6bead2c897e9e39ca1a2dbd241f96dc181c8d32adcb6201258624fb37d2c7fe
Here I experienced problems pushing an image while running more than one instance of the registry pod: the push would partially complete, then fail with: blob upload invalid - invalid secret.

I had to scale down to 1 replica in the deployment to get the push to work. The likely cause is that each registry replica generates its own random HTTP secret at startup, so a chunked blob upload begun against one pod is rejected when the Service routes a later request to the other pod; the registry documentation requires all instances behind a load balancer to share the same http.secret.
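
If multiple replicas are a goal, the CNCF Distribution configuration docs expose http.secret as the REGISTRY_HTTP_SECRET environment variable; setting it to the same value on every replica lets upload sessions span pods. A sketch of the extra env entry (the registry-http-secret Secret is hypothetical and would need creating first):

```yaml
            - name: REGISTRY_HTTP_SECRET
              valueFrom:
                secretKeyRef:
                  name: registry-http-secret   # hypothetical pre-created Secret
                  key: http-secret
```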

From here on out we’ll use a single replica for the registry to avoid these issues. You can scale back up after pushing your images if you want, but this is where I switched to using a StatefulSet with just 1 replica:

registry-statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  serviceName: registry-service
  minReadySeconds: 10 # by default is 0
  template:
    metadata:
      labels:
        app: registry
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: registry
          image: registry:3.0.0
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: registry-vol
              mountPath: "/var/lib/registry"
            - name: certs-vol
              mountPath: "/certs"
              readOnly: true
            - name: auth-vol
              mountPath: "/auth"
              readOnly: true
          env:
            # See documentation for configuring authentication and TLS for the registry with htpasswd
            - name: REGISTRY_AUTH
              value: "htpasswd"
            - name: REGISTRY_AUTH_HTPASSWD_REALM
              value: "Registry Realm"
            - name: REGISTRY_AUTH_HTPASSWD_PATH
              value: "/auth/htpasswd"
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: "/certs/tls.crt"
            - name: REGISTRY_HTTP_TLS_KEY
              value: "/certs/tls.key"
            - name: REGISTRY_STORAGE_DELETE_ENABLED
              value: "true"
      volumes:
        - name: registry-vol
          persistentVolumeClaim:
            claimName: registry-pvc
        - name: certs-vol
          secret:
            secretName: certs-secret
        - name: auth-vol
          secret:
            secretName: auth-secret

Using Registry from a Kubernetes Pod

Pods pulling from the registry need to authenticate, so create an image pull secret for them to reference:

[root@mawenzi-01 nginx]# kubectl create secret docker-registry nginx-secret --docker-server=my-registry:5000 --docker-username=myuser --docker-password=mypassword
secret/nginx-secret created
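
For reference, kubectl stores these credentials in a kubernetes.io/dockerconfigjson secret, and the auth field embedded in it is simply the base64 encoding of username:password:

```shell
# The "auth" value inside .dockerconfigjson for myuser/mypassword:
printf 'myuser:mypassword' | base64
# → bXl1c2VyOm15cGFzc3dvcmQ=
```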

Create a pod spec that uses the my-registry:5000/mynginx:v1 image from the registry, and references the nginx-secret for pulling the image.

nginx-pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx-pod
  name: nginx-pod
spec:
  containers:
  - image: my-registry:5000/mynginx:v1
    name: nginx-container
    resources: {}
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: nginx-secret
  restartPolicy: Always

If you’re behind the HPE proxy, containerd will try to reach my-registry through the proxy. Add an entry to /usr/lib/systemd/system/containerd.service on every cluster node, so containerd starts up with my-registry in the NO_PROXY environment variable.

[Service]
Environment="HTTP_PROXY=http://proxy.houston.hpecorp.net:8080/"
Environment="HTTPS_PROXY=http://proxy.houston.hpecorp.net:8080/"
Environment="FTP_PROXY=http://proxy.houston.hpecorp.net:8080/"
Environment="NO_PROXY=my-registry,localhost,127.0.0.1,.us.cray.com,.hpe.com,hpc.amslabs.hpecorp.net,10.214.128.0/21,10.96.0.0/12,10.244.0.0/16"

Restart the containerd instances with systemctl daemon-reload && systemctl restart containerd after making this change.
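
Note that a containerd package update can overwrite /usr/lib/systemd/system/containerd.service; a systemd drop-in under /etc carries the same settings and survives upgrades. A sketch (the file is written locally here; install it as /etc/systemd/system/containerd.service.d/http-proxy.conf on each node):

```shell
# Drop-in carrying the same proxy environment; the file name is arbitrary.
cat > http-proxy.conf <<'EOF'
[Service]
Environment="NO_PROXY=my-registry,localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16"
EOF
grep NO_PROXY http-proxy.conf
# After installing: systemctl daemon-reload && systemctl restart containerd
```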

Apply the nginx-pod.yml and verify that the pod is running:

Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  14s   default-scheduler  Successfully assigned default/nginx-pod to mawenzi-03
  Normal  Pulling    13s   kubelet            Pulling image "my-registry:5000/mynginx:v1"
  Normal  Pulled     10s   kubelet            Successfully pulled image "my-registry:5000/mynginx:v1" in 3.217s (3.217s including waiting). Image size: 62950322 bytes.
  Normal  Created    10s   kubelet            Created container: nginx-container
  Normal  Started    10s   kubelet            Started container nginx-container

This concludes the setup of a private container registry running in Kubernetes, and how to use it from both the command line and from within Kubernetes pods.

Troubleshooting

Flannel Down on Node

Double-check that all nodes have a running kube-flannel pod and a cluster network IP address assigned:

[root@mawenzi-01 ~]# kubectl get all -n kube-flannel -o wide
NAME                        READY   STATUS             RESTARTS          AGE    IP               NODE         NOMINATED NODE   READINESS GATES
pod/kube-flannel-ds-4482d   0/1     CrashLoopBackOff   54194 (34s ago)   199d   10.214.134.147   mawenzi-01   <none>           <none>
pod/kube-flannel-ds-cdw8t   1/1     Running            0                 199d   10.214.130.159   mawenzi-02   <none>           <none>
pod/kube-flannel-ds-psmgg   1/1     Running            0                 199d   10.214.129.159   mawenzi-03   <none>           <none>

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE    CONTAINERS     IMAGES                               SELECTOR
daemonset.apps/kube-flannel-ds   3         3         2       3            2           <none>          199d   kube-flannel   ghcr.io/flannel-io/flannel:v0.27.2   app=flannel,k8s-app=flannel

In this example, one node is not running the kube-flannel pod and is in a CrashLoopBackOff state. This means that the node does not have cluster network connectivity, and will not be able to pull images from the registry or communicate with other pods in the cluster.

Looking at the logs for that kube-flannel pod shows:

I0315 21:27:02.236125       1 main.go:239] Created subnet manager: Kubernetes Subnet Manager - mawenzi-01
I0315 21:27:02.236135       1 main.go:242] Installing signal handlers
I0315 21:27:02.236442       1 main.go:519] Found network config - Backend type: vxlan
E0315 21:27:02.236532       1 main.go:276] Failed to check br_netfilter: stat /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory

Fix: The kube-flannel pod requires the br_netfilter kernel module to be loaded on the node. To fix this, we can load the module and restart the kube-flannel pod:

modprobe bridge
modprobe br_netfilter
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo 1 > /proc/sys/net/ipv4/ip_forward

# Apply and confirm:
[root@mawenzi-01 ~]# sysctl -p /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
[root@mawenzi-01 ~]# kubectl delete -n kube-flannel pod/kube-flannel-ds-4482d
pod "kube-flannel-ds-4482d" deleted
[root@mawenzi-01 ~]# kubectl get all -n kube-flannel -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
pod/kube-flannel-ds-cdw8t   1/1     Running   0          199d   10.214.130.159   mawenzi-02   <none>           <none>
pod/kube-flannel-ds-k68tq   1/1     Running   0          7s     10.214.134.147   mawenzi-01   <none>           <none>
pod/kube-flannel-ds-psmgg   1/1     Running   0          199d   10.214.129.159   mawenzi-03   <none>           <none>

[root@mawenzi-01 ~]# ip a | grep flannel
22: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    inet 10.244.0.0/32 scope global flannel.1
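
To make this fix survive a reboot, the module load can be persisted as well (the bridge sysctl was already appended to /etc/sysctl.conf above). A sketch, using the standard systemd/sysctl drop-in locations on RHEL-family systems; the sysctl file name is arbitrary:

```shell
mkdir -p /etc/modules-load.d /etc/sysctl.d
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# ip_forward was only set in /proc above; persist it too.
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ipforward.conf
```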

Now, test by doing a docker login from one of the nodes.

Docker Login with Proxy

You might need to add the registry’s IP and domain to the no_proxy environment variable so that docker login bypasses the proxy and connects directly to the registry service.

[root@mawenzi-01 ~]# export no_proxy=my-registry,localhost,127.0.0.1,hpc.amslabs.hpecorp.net,10.214.128.0/21,10.96.0.0/12,10.244.0.0/16,.hpe.com
[root@mawenzi-01 ~]# docker login my-registry:5000 -u myuser -p mypassword