Keeping your system updated with security patches is usually easy. Your distribution’s package manager, such as yum or apt, will do it for you, even fully automatically if you want.
With Docker, a technology that gives an application its own filesystem became popular. Applications running in Docker containers are supposed to be more secure than normal applications, since they can be completely isolated from the rest of the host system. But each image brings along software that your package manager does not know about, which makes updating it the traditional way impossible.
Update the container
You could update the container’s packages either by issuing the update command (for example
apt-get update, or your distribution’s equivalent) via
docker exec, or automatically by running the update command when the container starts.
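As a minimal sketch of the docker exec approach (assuming a hypothetical running container named myapp with a Debian/Ubuntu base image):

```shell
# Refresh the package index and upgrade packages inside the running
# container. "myapp" is a placeholder container name.
docker exec myapp apt-get update
docker exec -e DEBIAN_FRONTEND=noninteractive myapp apt-get -y upgrade
```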
But this has downsides! The container will grow in size, because Docker stores files in layers: whatever you change inside the container only adds a new layer on top, while the old state remains available in the image’s layers underneath.
Another problem is that this defeats the image’s reproducibility, and thereby a container’s main advantage.
In practice this method also fails when you run several containers from one image: updating each of these containers separately is a waste of time and resources.
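You can see the layering problem for yourself. In this sketch (container and tag names are hypothetical), committing an updated container stacks a new layer on top of the old ones, so the outdated files are still carried around in the resulting image:

```shell
# Upgrade packages inside the container, then freeze the result.
docker exec myapp apt-get -y upgrade
docker commit myapp myapp:patched

# The history still lists the original layers; the image only grew.
docker history myapp:patched
```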
Update the image
Usually Docker images are obtained from Docker Hub, or you may have a private registry for your own applications; I’ll focus on the former. Docker Hub hosts two types of images. The first are automated builds (called Autobuild), which must be linked to a GitHub or Bitbucket repository. These builds check out the project on Docker’s CI, build it there, and push the image to the registry, which makes the whole process visible to the user.
The other way is to push an image directly to the registry. The build process is not visible, and these repositories don’t even need to be linked to a Git repository. In many cases you won’t be able to rebuild the image on your own, so you have no way to apply updates yourself. These images are not trustworthy, because you can’t tell what’s inside the box and you are completely dependent on the repository maintainer for the latest security patches.
Autobuild repositories are easier to use. If the maintainer updates the images regularly, you’re fine and just have to
docker pull the image at the same frequency. Don’t forget to recreate the containers afterwards. If the maintainer isn’t updating the image, you can do it on your own. All you need to do is obtain the
Dockerfile and all linked resources, run
docker build, done.
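Both paths can be sketched as follows. The repository name example/app, the container name app, and the Git URL are hypothetical placeholders:

```shell
# Case 1: the maintainer rebuilds regularly -- pull and recreate.
docker pull example/app:latest
docker stop app && docker rm app
docker run -d --name app example/app:latest

# Case 2: the image is stale -- rebuild it yourself from the Dockerfile.
git clone https://github.com/example/app.git
cd app
docker build -t example/app:latest .
```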
Of course this is a lot more work than just running a package manager. Keep this in mind when you decide to move an application into a container. Depending on the application’s features there are different security requirements: an application that exposes ports to the outside needs to be updated more often than a text editor, but in any case your images should not become too old. Always take a look at the column