You create your Docker image and push it to a registry before referring to it in a Kubernetes pod. The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.
The default pull policy is `IfNotPresent`, which causes the kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:

- set the `imagePullPolicy` of the container to `Always`.
- omit the `imagePullPolicy` and use `:latest` as the tag for the image to use.
- omit the `imagePullPolicy` and the tag for the image to use.

Note that you should avoid using the `:latest` tag; see Best Practices for Configuration for more information.
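For example, a container spec that forces a pull on every container start; the image name here is a placeholder:

```yaml
containers:
  - name: app
    image: registry.example.com/app:v1
    # Always pull, even if the image is already cached on the node
    imagePullPolicy: Always
```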
The Docker CLI now supports the `docker manifest` command, with subcommands such as `create`, `annotate`, and `push`. These commands can be used to build and push the manifests. You can use `docker manifest inspect` to view the manifest.
Please see the Docker documentation here: https://docs.docker.com/edge/engine/reference/commandline/manifest/
See examples of how we use this in our build harness: https://cs.k8s.io/?q=docker%20manifest%20(create%7Cpush%7Cannotate)&i=nope&files=&repos=
These commands rely on and are implemented purely in the Docker CLI. You will need to either edit `$HOME/.docker/config.json` and set the `experimental` key to `enabled`, or simply set the `DOCKER_CLI_EXPERIMENTAL` environment variable to `enabled` when you call the CLI commands.
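For example, a minimal session might look like this:

```shell
# Enable experimental CLI features for this shell session
export DOCKER_CLI_EXPERIMENTAL=enabled

# View the manifest (or manifest list) of an image in a registry
docker manifest inspect busybox
```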
Note: Please use Docker 18.06 or above; versions below that either have bugs or do not support the experimental command line option. For example, https://github.com/docker/cli/issues/1135 causes problems under containerd.
If you run into trouble with uploading stale manifests, just clean up the older manifests in `$HOME/.docker/manifests` to start fresh.
For Kubernetes, we have typically used images with the suffix `-$(ARCH)`. For backward compatibility, please generate the older images with suffixes. The idea is to generate, say, a `pause` image which has the manifest for all the architectures, and, say, a `pause-amd64` image which is backwards compatible for older configurations or YAML files which may have hard-coded the images with suffixes.
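As a sketch of that pattern, assuming the per-architecture images have already been pushed to a hypothetical `registry.example.com`:

```shell
# Build a manifest list "pause" out of the suffixed per-arch images
docker manifest create registry.example.com/pause \
    registry.example.com/pause-amd64 \
    registry.example.com/pause-arm64

# Annotate each entry with its architecture, then push the list
docker manifest annotate registry.example.com/pause \
    registry.example.com/pause-amd64 --arch amd64
docker manifest annotate registry.example.com/pause \
    registry.example.com/pause-arm64 --arch arm64
docker manifest push registry.example.com/pause
```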
Private registries may require keys to read images from them. Credentials can be provided in several ways:

- using Google Container Registry,
- using Amazon Elastic Container Registry,
- using Azure Container Registry,
- using IBM Cloud Container Registry,
- configuring nodes to authenticate to a private registry,
- using pre-pulled images, or
- specifying `imagePullSecrets` on a Pod.

Each option is described in more detail below.
Kubernetes has native support for the Google Container Registry (GCR) when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. `gcr.io/my_project/image:tag`). All pods in a cluster will have read access to images in this registry.

The kubelet will authenticate to GCR using the instance's Google service account. The service account on the instance will have a `https://www.googleapis.com/auth/devstorage.read_only` scope, so it can pull from the project's GCR, but not push.
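For instance, a Pod running on GCE/GKE can reference a GCR image by its full name; `my_project/image:tag` below is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcr-image-example
spec:
  containers:
    - name: app
      # Full GCR image name: registry/project/image:tag
      image: gcr.io/my_project/image:tag
```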
Kubernetes has native support for the Amazon Elastic Container Registry when nodes are AWS EC2 instances. Simply use the full image name (e.g. `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`) in the Pod definition. All users of the cluster who can create pods will be able to run pods that use any of the images in the ECR registry.
The kubelet will fetch and periodically refresh ECR credentials. It needs the following permissions to do this:

- `ecr:GetAuthorizationToken`
- `ecr:BatchCheckLayerAvailability`
- `ecr:GetDownloadUrlForLayer`
- `ecr:GetRepositoryPolicy`
- `ecr:DescribeRepositories`
- `ecr:ListImages`
- `ecr:BatchGetImage`
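A minimal sketch of an IAM policy granting those permissions to the node instance role; in practice you may want to scope `Resource` to specific repositories:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```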
Requirements:

- You must be using kubelet version `v1.2.0` or newer (e.g. run `/usr/bin/kubelet --version=true`).
- If your nodes are in a different region than your registry, you need version `v1.3.0` or newer.

Troubleshooting:

- Get $REGION (e.g. `us-west-2`) credentials on your workstation. SSH into the host and run Docker manually with those credentials. Does it work?
- Verify the kubelet is running with `--cloud-provider=aws`.
- Check the kubelet logs (e.g. `journalctl -u kubelet`) for log lines like:
```
aws_credentials.go:109] unable to get ECR credentials from cache, checking ECR API
aws_credentials.go:116] Got ECR credentials from ECR API for <AWS account ID for ECR>.dkr.ecr.<AWS region>.amazonaws.com
```
When using Azure Container Registry you can authenticate using either an admin user or a service principal. In either case, authentication is done via standard Docker authentication. These instructions assume the `azure-cli` command line tool.

You first need to create a registry and generate credentials; complete documentation for this can be found in the Azure container registry documentation.

Once you have created your container registry, you will use the following credentials to login:

- `DOCKER_USER`: service principal or admin username
- `DOCKER_PASSWORD`: service principal password or admin user password
- `DOCKER_REGISTRY_SERVER`: `${some-registry-name}.azurecr.io`
- `DOCKER_EMAIL`: `${some-email-address}`
Once you have those variables filled in, you can configure a Kubernetes Secret and use it to deploy a Pod.
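For example, using the `kubectl create secret docker-registry` command described later on this page (the secret name `acr-secret` is illustrative):

```shell
kubectl create secret docker-registry acr-secret \
    --docker-server=$DOCKER_REGISTRY_SERVER \
    --docker-username=$DOCKER_USER \
    --docker-password=$DOCKER_PASSWORD \
    --docker-email=$DOCKER_EMAIL
```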
IBM Cloud Container Registry provides a multi-tenant private image registry that you can use to safely store and share your images. By default, images in your private registry are scanned by the integrated Vulnerability Advisor to detect security issues and potential vulnerabilities. Users in your IBM Cloud account can access your images, or you can use IAM roles and policies to grant access to IBM Cloud Container Registry namespaces.
To install the IBM Cloud Container Registry CLI plug-in and create a namespace for your images, see Getting started with IBM Cloud Container Registry.
If you are using the same account and region, you can deploy images that are stored in IBM Cloud Container Registry into the default namespace of your IBM Cloud Kubernetes Service cluster without any additional configuration; see Building containers from images. For other configuration options, see Understanding how to authorize your cluster to pull images from a registry.
Note: If you are running on Google Kubernetes Engine, there will already be a `.dockercfg` on each node with credentials for Google Container Registry. You cannot use this approach.

Note: If you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. You cannot use this approach.

Note: This approach is suitable if you can control node configuration. It will not work reliably on GCE, or any other cloud provider that does automatic node replacement.

Note: Kubernetes as of now only supports the `auths` and `HttpHeaders` sections of Docker config. This means credential helpers (`credHelpers` or `credsStore`) are not supported.
Docker stores keys for private registries in the `$HOME/.dockercfg` or `$HOME/.docker/config.json` file. If you put the same file in any of the search paths below, the kubelet uses it as the credential provider when pulling images.

- `{--root-dir:-/var/lib/kubelet}/config.json`
- `{cwd of kubelet}/config.json`
- `${HOME}/.docker/config.json`
- `/.docker/config.json`
- `{--root-dir:-/var/lib/kubelet}/.dockercfg`
- `{cwd of kubelet}/.dockercfg`
- `${HOME}/.dockercfg`
- `/.dockercfg`
Note: You may have to set `HOME=/root` explicitly in your environment file for kubelet.
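For reference, such a `config.json` follows the standard Docker client format; a minimal sketch, where the registry host is a placeholder and `auth` is the base64 encoding of `user:password`:

```json
{
  "auths": {
    "my-registry.example.com": {
      "auth": "dXNlcjpwYXNzd29yZA=="
    }
  }
}
```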
Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:

1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json`.
2. View `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use.
3. Get a list of your nodes, for example:
   - if you want the names: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
   - if you want the IPs: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
4. Copy your local `.docker/config.json` to one of the search paths listed above. For example: `for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done`
Verify by creating a pod that uses a private image, e.g.:
```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
```

```
pod/private-image-test-1 created
```
If everything is working, then, after a few moments, you can run:

```shell
kubectl logs private-image-test-1
```

and see that the command outputs:

```
SUCCESS
```

If you suspect that the command failed, you can run:

```shell
kubectl describe pods/private-image-test-1 | grep 'Failed'
```

In case of failure, the output is similar to:

```
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```
You must ensure all nodes in the cluster have the same `.docker/config.json`. Otherwise, pods will run on some nodes and fail to run on others. For example, if you use node autoscaling, then each instance template needs to include the `.docker/config.json` or mount a drive that contains it.

All pods will have read access to images in any private registry once private registry keys are added to the `.docker/config.json`.
Note: If you are running on Google Kubernetes Engine, there will already be a `.dockercfg` on each node with credentials for Google Container Registry. You cannot use this approach.

Note: This approach is suitable if you can control node configuration. It will not work reliably on GCE, or any other cloud provider that does automatic node replacement.
By default, the kubelet will try to pull each image from the specified registry. However, if the `imagePullPolicy` property of the container is set to `IfNotPresent` or `Never`, then a local image is used (preferentially or exclusively, respectively).
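For example, a container that should only ever use an image already present on the node could be declared like this (the image name is a placeholder):

```yaml
containers:
  - name: app
    image: registry.example.com/prepulled-app:v1
    # Never contact a registry; fail if the image is not already on the node
    imagePullPolicy: Never
```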
If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.
This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.
All pods will have read access to any pre-pulled images.
Note: This approach is currently the recommended approach for Google Kubernetes Engine, GCE, and any cloud-providers where node creation is automated.
Kubernetes supports specifying registry keys on a pod.
Run the following command, substituting the appropriate uppercase values:
```shell
kubectl create secret docker-registry <name> \
    --docker-server=DOCKER_REGISTRY_SERVER \
    --docker-username=DOCKER_USER \
    --docker-password=DOCKER_PASSWORD \
    --docker-email=DOCKER_EMAIL
```
If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes secret. Create a Secret based on existing Docker credentials explains how to set this up. This is particularly useful if you are using multiple private container registries, as `kubectl create secret docker-registry` creates a Secret that will only work with a single private registry.
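As a sketch, importing an existing credentials file (the secret name `regcred` is illustrative):

```shell
kubectl create secret generic regcred \
    --from-file=.dockerconfigjson=$HOME/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson
```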
Note: Pods can only reference image pull secrets in their own namespace, so this process needs to be done one time per namespace.
Now, you can create pods which reference that secret by adding an `imagePullSecrets` section to a pod definition.
```shell
cat <<EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey
EOF

cat <<EOF >> ./kustomization.yaml
resources:
- pod.yaml
EOF
```
This needs to be done for each pod that is using a private registry. However, setting this field can be automated by setting the `imagePullSecrets` in a ServiceAccount resource. Check Add ImagePullSecrets to a Service Account for detailed instructions.
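For example, patching the `default` ServiceAccount in a namespace so that new Pods in it pick up the secret automatically (the secret name is illustrative):

```shell
kubectl patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
```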
You can use this in conjunction with a per-node `.docker/config.json`. The credentials will be merged. This approach will work on Google Kubernetes Engine.
There are a number of solutions for configuring private registries. Here are some common use cases and suggested solutions. For a cluster with proprietary images, Pods can reference them using `imagePullSecrets`. If you need access to multiple registries, you can create one secret for each registry. The kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`.