
CONTAINERS ARE THE FUTURE, IF … (my response)

This morning (8/26/15), Edward Haletky of The Virtualization Practice published the article CONTAINERS ARE THE FUTURE, IF … which needs more clarification. I was going to respond over Twitter, but there was no way that was happening. So let's take some of these line by line. NOTE: this is my opinion as someone who has been in the container space for almost 2 years now.

 

 

 

 "The reasons are myriad, but there seem to be some issues with people saying that virtualization is dead (I do not agree)"

I agree too. Virtualization is the new legacy. Virtualization will be around for another 10 years or longer. But it's not the new hotness.

 

"or that containers on bare metal with CoreOS, Red Hat Atomic, or some other container-built OS is the future (which is possible). Neither of these will happen unless we consider why clouds are so popular. Would a cloud give up the automation and tools it has just to go back to bare metal with containers?"

Yes, they will give it up. Times and processes change. That's like saying we should still treat VMs as if they were physical machines. The hypervisor is a CPU, memory, and management overhead that must be accounted for in every operation. We can get rid of that hypervisor in time.

 

"I have yet to hear of a Docker environment being used outside of virtual machines in a multi-tenant cloud. Why? Because Docker and Docker-like containers, have no concept of tenancy."

True, 99% of cloud environments that run containers run them on top of virtual machines for exactly this reason. But I think you are confusing a few things. You talked about containers within "organizations", yet this is a service provider issue. That "service provider" could be internal IT, but if most shops couldn't even deliver IaaS with VMs, why would they think multi-tenant containers are the way to do things? This technology is only 2 years old. How long did it take VMware to have a multi-tenant solution? vCloud was killed off, and vCAC isn't a very good answer either. Today, multi-tenant approaches are 100% customized and will be like that for a while, which makes this point pretty moot IMO.

 

 

"However, we have a good grasp of multi-tenancy with networking and virtual machines. We can assign virtual machines to tenants. We could even assign hardware to tenants, but that seems more like hosting than a public cloud service."

The network has changed. It's far beyond the VLAN. Containers are not given an IP address or a hardware NIC. You need to break out of that mindset. Read about socketplane.io and libnetwork. It's all about SDN now. Others will be playing catch-up and building integrations while Docker will have a native VXLAN implementation that just works behind the scenes. Compose + Swarm + multi-host networking (experimental).
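To make that concrete, here is a minimal sketch of Docker's multi-host overlay networking, assuming an Engine build with the overlay driver enabled and a key-value store (Consul, etcd, or similar) already wired up; the network and container names are made up for illustration:

```shell
# Create a VXLAN-backed network that spans hosts; "mynet" and "web"
# are hypothetical names, not from the article.
docker network create -d overlay mynet
docker run -d --net=mynet --name web nginx
# On any other host pointed at the same KV store, a container joined
# to "mynet" can reach "web" by name -- no VLANs, no per-container NICs.
docker network ls
```

Compare that to plumbing VLANs per tenant: the overlay is created and torn down at the same speed as the containers themselves.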

 

"Security is a huge hurdle for containers."

Agree, but as my friend Tommy Trogden (@vTexan) says "Security puts the 'no' in 'innovation'".

 

"Security professionals are often not even involved in container build-out and may not be involved in the build-out of the container OS, such as CoreOS or Red Hat Atomic."

These teams build out minimal OSs to strip away as much bloat as possible, and I think the minimal package set actually reduces security vulnerabilities. But why single out those projects? You do know that you can build out a Docker infrastructure on Red Hat, Ubuntu, Fedora, etc. So the security teams just need to hand over an approved Linux distribution. Problem solved.

 

 "Instead, security is left to developers"

Nope. Docker has their own team of security professionals building pieces into the product. Talk to Diogo.

 

 "Granted, I think this is not a good behavior, given the number and types of breaches we are seeing, but developers are not normally security folks."

How does this change a normal application security assessment? The application hasn't changed; only the infrastructure/platform it is deployed on has. Security people still need to test the application for vulnerabilities.

 

 "the code needs to be secure, the container needs to be secure, the network needs to be secure, etc."

This is a good time to tell you that containers actually ENHANCE security. For instance, say the security person made a mistake and let a known vulnerability go unnoticed. Guess what: instead of having your entire box rooted, it's only the container that gets rooted. Kill the container. There, threat eliminated. Extend that to applications that mount their data read-only and you have enhanced it even more.
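The "kill it and start fresh" workflow above fits in a couple of commands; the container and image names here are hypothetical, and --read-only plus a :ro volume mount are the hardening knobs being described:

```shell
# Run the app with a read-only root filesystem and read-only data mount
# ("app" and "myimage" are illustrative names).
docker run -d --name app --read-only -v /data:/data:ro myimage

# If "app" is ever compromised, throw it away and start clean:
docker kill app && docker rm app
docker run -d --name app --read-only -v /data:/data:ro myimage
```

The compromised filesystem dies with the container; the read-only mount means the attacker never had write access to the data in the first place.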

 

 "This does not even go into the discussion of encryption within a container or how to prevent data loss, etc."

WTF? Encryption inside the container? That's like saying my entire VM is encrypted and I have to have a PGP key just to type 'ls'. Why would I need my entire container encrypted? Who has entire VMs encrypted? Hell, you're never actually supposed to be IN your container; that's not how you manage a container infrastructure. Oh, and on data loss: look into Docker Volume drivers. Offload that stuff to a persistent datastore.
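A hedged sketch of the volume-driver idea: the --volume-driver flag is the Docker-side hook, "flocker" is just one example driver, and the volume and image names are illustrative:

```shell
# Data lives in an external, persistent datastore managed by the
# volume driver; the container itself stays disposable.
docker run -d --name db --volume-driver=flocker \
    -v dbdata:/var/lib/postgresql/data postgres

# Kill the container and the data survives; a replacement container
# simply mounts the same named volume.
docker kill db && docker rm db
```

That separation, disposable compute on top of durable storage, is the answer to the data-loss worry.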

 

"How do you find out what is wrong within a container? Without good logging and good debugging tools, this becomes more than difficult: it becomes impossible."

You should listen to people like New Relic and Capital One who run Docker in production. Everyone figures out their own system, but the one thing they have in common is that STDOUT and STDERR are directed to the logging output of the host. AND YOU LOG EVERYTHING. Then you use a log forwarding tool like Heka to move those into an ELK stack, Logstash, or Splunk. You can also tail the logs of a specific container with docker logs --follow containerID. This should be a good starting point.
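A minimal sketch of that starting point, assuming an image whose process writes to STDOUT/STDERR (the container name "web" is made up):

```shell
# Tail one container's output -- everything its process writes to
# STDOUT/STDERR:
docker logs --follow web

# Or hand all output to the host's logging pipeline at run time,
# ready for a forwarder like Heka to ship into ELK or Splunk:
docker run -d --name web --log-driver=syslog nginx
```
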

 

"Containers need to migrate and move to other locations: locations that have different operating systems, from on-premises to clouds and between clouds"

It hurts to hear these sorts of things... Containers can move to other locations; what makes you think they can't? Save that running image, put it up in your registry, then pull it down and run it elsewhere. Or build out a Dockerfile to run that thing anywhere. You need persistent data? Look at Docker Volume Drivers, then rely on trusty storage technologies to do the replication between sites. You want to talk NoSQL? Most of that replication is already built in between nodes. Different operating systems and clouds? REALLY!? It's a container that can run on ANY Linux distribution. Have you not listened to the Docker 101 pitch? Go look at Flocker + Weave; they created a vMotion-like system for containers if you think it's necessary.
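The "save it, push it, run it elsewhere" flow is just a few commands; the container name, registry repo, and tag below are made-up examples:

```shell
# Snapshot a running container as an image and publish it:
docker commit mycontainer myrepo/myapp:v1
docker push myrepo/myapp:v1

# On any other Linux host, in any cloud:
docker pull myrepo/myapp:v1
docker run -d myrepo/myapp:v1
```
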

 

"Without migration, we may end up re-staging and redeploying all the time into new clouds and locations instead of actually processing work"

Have you ever used a container? You know they take less than 10ms to spin up, right?

 

"If we cannot deploy the underlying operating system properly, we cannot build containers. At the same time, if all we worry about is containers, then how do we build the operating system? We need a better platform into which we can deploy necessary bits as code as we deploy the containers"

I feel like I'm spoon-feeding now... Docker Machine! That's not your sort of thing? Then you can go old school with Puppet or Chef.
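For the unfamiliar, Docker Machine boils host provisioning down to roughly this; the "virtualbox" driver and the machine name "dev" are just one example, and other drivers target AWS, Azure, DigitalOcean, etc.:

```shell
# Provision a host and install the Docker Engine on it:
docker-machine create --driver virtualbox dev
# Point the local Docker client at the new host:
eval "$(docker-machine env dev)"
docker ps   # now talking to the freshly built machine
```
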

 

"We also need Testing as Code to test before deployment and to continually test after deployment."

I know you're not an AppDev person, and I don't claim to be a very good one, but that's why you run unit tests, integration tests, and build tests with Jenkins, TravisCI, CircleCI, CodeShip... need I go on?
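One hedged way to wire those tests into a container pipeline: build the image, then run the suite inside a throwaway container so a nonzero exit fails the CI job (the image tag and test command are illustrative, not from the post):

```shell
# Build the candidate image, then test it in a disposable container;
# --rm cleans up, and a failing suite fails this step of the pipeline.
docker build -t myapp:test .
docker run --rm myapp:test python -m pytest
```
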

 

"Further, we need Analytics as Code to run through all the log files, key performance indicators, and deployment results to feed back into data protection, blueprints, and other tools to ensure the investment is protected."

As said previously... many people are running these in production and do this today. Where there is heartache there is opportunity.

 

"Analytics as Code will ensure that all blueprints contain any one-off changes that are missed by those deploying and fixing all the time. Blueprints might be ignored after deployment, but that would be a bad idea in my book."

Your blueprint is the Dockerfile. Or it's your development pipeline... Commit your changes in your working branch and run all tests with CI. When the tests pass, the owner merges the branch into master, where the tests run again; when those pass, CI talks to your container orchestration engine to perform a rolling upgrade by grabbing the Dockerfile, which grabs the latest bits from your git repo. I feel like I'm repeating myself now...
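As a sketch of "the Dockerfile is your blueprint", the base image, files, and commands below are illustrative, but every rebuild from a file like this reproduces the same environment, so one-off drift has nowhere to hide:

```dockerfile
# Hypothetical blueprint for a small Python web app.
FROM python:2.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```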

 

"As you can see, for containers to really replace current virtualization and cloud systems, they need more tools, more capability, and most of all, more security"

Disagree. Different tools for different jobs. You aren't going to run SAP or Exchange in containers. And many of these cutting-edge companies use containers for flexibility and suffer the pain along the way. I don't think you have done enough research into the Docker Ecosystem Mindmap. There are LOTS of players that address many of your concerns. Many companies look at new initiatives as a way to move into containers (rather than taking existing things and moving them into containers). You need the right application and the right team to make it a success. There will be failures along the way, but that's what makes this space so exciting.

 

"Multi-tenancy is crucial, as is migration."

Nope... it's not crucial for success. I've already explained earlier why migration made sense when applications relied on the underlying infrastructure for resiliency. Now that is being built into the application itself. Commodity is the new black (or whitebox, however you want to spin it).

 

"The team that develops containerized code should involve security, data protection, compliance, etc. It should break down all barriers, so that the organization can remain agile while keeping itself safe."

Do you realize you are describing a pillar of DevOps? And there is no such thing as "containerized code". It's just code. That code (if written in an open language like Python, Java, Ruby, Go, Erlang, JavaScript, etc.) can run on a server or in a container. Nothing is done specially for the container itself. It's just a deployment and infrastructure tool.

 

Ok... that was long winded. Aren't you glad I didn't do that over twitter? 
