Q&A: Lessons Learned from Deploying Docker into Production

Last week, I had the pleasure of hosting a webinar sharing everything from tips and tricks to challenges and watch-outs from our experience deploying and running more than 4,000 Docker containers in production. I especially enjoyed the productive Q&A session.

In case you missed it, you can still watch the recorded version here. And because you all asked so many really good questions, I've compiled the answers to some of the most frequently asked Docker questions right here.

Q. Can you recommend a good source that compares and contrasts each of the Docker tools, especially one that explains how each relates to (or overlaps with) the others?

A. There is a GitHub project that describes most of the tools in the Docker ecosystem and compares them.

Q. What’s the best way for containers to talk to each other over the network — something like Weave or Libnetwork?

A. Weave is pretty awesome. One of the other things included in Docker 1.7 is core Docker networking, which comes from the SocketPlane team, and that's going to give you inter-host networking. It's still going to operate at the user level, like Weave, but it's pretty smooth. Docker 1.8 is supposed to add a much better user experience around networking, and I'd say the improvements Docker has made around networking and inter-host networking are really going to start shining in 1.8.
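For a concrete picture of container-to-container networking, here is a hypothetical Docker Compose fragment using a user-defined network; it assumes the version 2 Compose file format (which postdates this webinar), and the service, image, and network names are made up:

```yaml
# Hypothetical sketch -- service, image, and network names are made up.
version: "2"
services:
  web:
    image: nginx
    networks:
      - backend
  api:
    image: myorg/api:latest   # placeholder image name
    networks:
      - backend
networks:
  backend:
    driver: bridge   # with a Swarm cluster, the "overlay" driver enables inter-host networking
```

Both services can then reach each other by service name (`web`, `api`) over the shared `backend` network.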

Q. Is Docker Compose a way to define relationships (links) between Docker containers, so that we can deploy containers for a web service, an API, and a database instance and have them automatically linked through the Compose file?

A. Yes. The Docker Compose file will link your containers for you. You can define those links and the associated environment variables inside your Docker Compose file.
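As a minimal sketch of what that looks like, assuming hypothetical service and image names, a Compose file linking a web container to a database might look like this:

```yaml
# Hypothetical docker-compose.yml sketch; service and image names are made up.
web:
  image: myorg/webapp:latest
  links:
    - db          # makes the db container reachable by hostname "db"
                  # and injects link environment variables into "web"
  ports:
    - "8080:80"
db:
  image: mongo:3.0
```

Running `docker-compose up` would then start both containers and wire the link automatically.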

Q. What are the major differences between apt-get upgrade and apt-get update?

A. Apt-get upgrade is going to upgrade every installed package on the system to its latest version. Apt-get update just goes out and makes sure the cache and the sources you download packages from are up to date. In short: upgrade changes the installed packages, while update only refreshes the index of what's available from those sources.

Q. In a production environment, why would you not want to upgrade apt-get packages?

A. The hard part is that if you start upgrading every package inside of a container, you can pull in packages that are not essential to your pipeline. You can also run into issues with unprivileged containers. Your base image “should” already have all of the upgraded packages taken care of for you. I agree with you that you would want to upgrade as many packages to the latest as possible. Some folks find it easier to simply inherit from ubuntu:latest to make sure they’re getting the latest security updates.
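To make the update/upgrade distinction concrete inside an image build, here is a minimal Dockerfile sketch; the base image and package choice are examples only, not a recommendation from the webinar:

```dockerfile
# Hypothetical sketch -- the package installed here is an example only.
FROM ubuntu:latest          # inheriting :latest picks up upstream security updates

# Refresh the package index first; without this, installs may fetch stale versions.
RUN apt-get update \
    # Installing the specific packages you need is usually preferable to a
    # blanket "apt-get upgrade" inside the container:
    && apt-get install -y --no-install-recommends curl \
    # Keep the image small by clearing the apt cache afterwards:
    && rm -rf /var/lib/apt/lists/*
```

Note that `update` and `install` run in the same `RUN` layer so the index refresh is never cached separately from the install.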

Watch the full recording here and tweet us your questions @OnModulus #DeploymentChat.

What is Xervo?

Xervo makes deploying applications in the public cloud or your own data center easy. Node.js, PHP, Java, Python, Nginx, and MongoDB are supported, and full Docker support is included in the Enterprise version. It’s free to get started.
