Container Networks and Network Containment
by Chris Swan
At the last ONUG meeting I presented on the topic of container networks and network containment. This time around, the ONUG Academy offers a chance to get hands-on and take a deeper dive into Container Networks.
If you’re a networks person and you’ve heard of Docker but haven’t yet taken a good look at it, then this is your chance. Similarly, if you’re an ops person or architect who’s interested in Docker but hasn’t yet figured out how it relates to networks, then this is the session for you.
Containers on their own don’t have networks, but thankfully container management tools like Docker take care of creating just enough network plumbing to make applications and services running in containers accessible. The purpose of this session is to look into what happens by default, what its limitations are, and what can be done to build a more useful network to interconnect containers.
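To give a flavour of that default plumbing, here are a few illustrative commands (assuming a stock Docker install on a Linux host with bridge-utils available) that reveal what Docker sets up out of the box:

```shell
# The docker0 bridge that containers attach to by default
ip addr show docker0

# The host-side veth pair endpoints that appear as containers start
brctl show docker0

# The iptables NAT rules Docker adds for published ports
sudo iptables -t nat -L DOCKER -n

# A container's own view of its network namespace
docker run --rm busybox ip addr
```

Each container gets one end of a veth pair plumbed into docker0, with NAT on the host providing outbound connectivity and port publishing inbound.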
It’s often tempting to think of containers as a different sort of virtual machine (VM), but containers behave very differently from VMs, and they connect together very differently – meaning a whole new set of network considerations. Containers are very often run in VMs, which is exactly the approach that will be taken for the tutorial.
I’m hoping to be joined by Socketplane founders John Willis and Brent Salisbury, a couple of DevOps and SDN veterans who are now part of the core networking team at Docker, Inc., so there should be a lot of Docker networking expertise in the room.
What is Docker anyway?
Docker is a system for managing containers on Linux (and Windows too sometime soon) that consists of three key elements:
- Build – lets you create container images from a simple specification called a Dockerfile. This starts with a base image (usually the operating system of your choice, or perhaps another container image that already has key dependencies). Layers can then be added that install additional software from package managers, source control systems, online resources, or local files. The Dockerfile also describes the default commands to be run when a container is launched, and which ports are used to expose network services.
- Ship – whether container images are made from a Dockerfile, or a snapshot of an existing container, it’s possible to move them around between machines, and host them in registries (which may be public or private). This makes the applications and services that reside within containers extremely portable across a variety of different environments (such as a developer laptop, a private integration test suite, and public cloud production).
- Run – Linux containers are made up of kernel capabilities, control groups, namespace configurations, an underlying copy-on-write filesystem, plus an optional policy-based access control framework. That’s a lot of moving parts that can’t reasonably be configured by hand. Docker’s libcontainer provides an API to all of that underlying complexity, and the Docker command line interface gives users a simple tool in front of that API, where a single command line can be used to configure and launch a container.
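To make the build–ship–run cycle concrete, here’s a minimal sketch (the image name, registry user and port numbers are illustrative, not from the session materials):

```shell
# Build: a minimal Dockerfile layering a web server onto a base image.
# --- Dockerfile ---
# FROM ubuntu:14.04
# RUN apt-get update && apt-get install -y nginx
# EXPOSE 80
# CMD ["nginx", "-g", "daemon off;"]

# Build an image from the Dockerfile in the current directory
docker build -t myuser/webdemo .

# Ship: push the image to a registry so other hosts can pull it
docker push myuser/webdemo

# Run: launch a container, publishing container port 80 on host port 8080
docker run -d -p 8080:80 --name web myuser/webdemo
```

The EXPOSE line in the Dockerfile and the -p flag at run time are where the image’s network services meet the host’s networking, which is exactly the seam the tutorial digs into.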
What will you get from the tutorial?
My plan is to start out gently by taking a look at what Docker does to a Linux host when it’s installed, and how containers get connected to that host’s networking. Once that’s covered, we’ll look at how containers can be interlinked by hand, and how they can be brought together more easily with composition tools. Of course, Docker has a bunch of options around networking, and these will be explored and experimented with; and we will take a look at custom configuring much of the networking using scripts such as Pipework.
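Linking by hand looks something like this (container names are illustrative); Docker injects hosts entries and environment variables into the linked container so it can find its peer:

```shell
# Start a database container
docker run -d --name db redis

# Link a second container to it; the alias "db" becomes resolvable
# inside the new container via an injected /etc/hosts entry
docker run --rm --link db:db busybox ping -c 1 db
```

Composition tools take the same idea and express a whole multi-container application, links included, in a single declarative file.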
Casey Bisson at Joyent recently published ‘The seven characteristics of container-native infrastructure’, and here’s what he had to say about networks:
“Each container is an equal peer on the network to which it is attached, with its own IP stack independent of its particular compute host; containers must not be ghettoized in the host’s network.”
That’s not how Docker works by default, and things get interesting when trying to connect containers together across hosts. So we’ll do that, using Open vSwitch.
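One possible manual approach, in the spirit of what we’ll try in the session, is to join an Open vSwitch bridge on each host with a GRE tunnel and then attach containers to it with Pipework (bridge name, container name and all addresses below are illustrative assumptions):

```shell
# On each host: create an OVS bridge, then connect it to the peer
# host with a GRE tunnel (use the other host's address as remote_ip)
sudo ovs-vsctl add-br ovs-br0
sudo ovs-vsctl add-port ovs-br0 gre0 -- set interface gre0 \
    type=gre options:remote_ip=192.168.1.2

# Attach a running container to the OVS bridge with an address
# on the shared overlay subnet
sudo pipework ovs-br0 web 10.0.0.1/24
```

A container on the second host given an address in the same 10.0.0.0/24 subnet can then reach this one directly, without any NAT in between.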
Finally, containers can be used for layer 4-7 network application services such as TLS termination, load balancing, content caching and network intrusion detection (NIDS); so the session will conclude with a look at how that’s done.
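As a sketch of that pattern, a load balancer can itself run as a container in front of other containers (the image names and haproxy.cfg contents here are assumptions for illustration):

```shell
# Run two web containers to balance across
docker run -d --name web1 myuser/webdemo
docker run -d --name web2 myuser/webdemo

# haproxy.cfg declares web1/web2 as backends; the links make their
# addresses resolvable inside the load balancer container
docker run -d --name lb --link web1:web1 --link web2:web2 \
    -p 80:80 \
    -v $(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
    haproxy
```

The same shape works for TLS termination or a caching proxy: the network service container publishes the host-facing port, and the application containers behind it stay off the host’s network entirely.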
All you need for the session is a laptop with Internet access, an AWS account, a browser that works with AWS, and an SSH client that can reach AWS VMs.
The Container Networks tutorial will take place from 9:30am-12:30pm on May 12th as a part of ONUG Academy at Columbia University. Register Now.
Chris Swan is CTO at Cohesive Networks, where he focuses on product development and product delivery. Chris was previously at UBS where he was CTO for Client Experience working on strategy and architecture for web and mobile offerings across all regions and business divisions. At UBS Chris was also co-head of Security CTO focussing on identity management, access control and data security. Chris represented UBS as a Director on the Steering Committee of the Open Data Center Alliance (ODCA), an industry association focussed on enterprise adoption of cloud computing.
Before joining UBS he was CTO at a London-based technology investment banking boutique, which operated a cloud-only IT platform. Chris previously held various senior R&D, architecture and engineering positions at Credit Suisse, which included networks, security, data centre automation and introduction of new application platforms. Prior to the world of financial services Chris was a Combat Systems Engineering Officer in the Royal Navy. He has an MBA from OUBS and a BEng in Electronics Engineering from the University of York.