I know how everyone loves to take pictures of slides, but here are the actual slides from my presentation at ContainerCon North America 2016, titled Highly Available and Distributed Containers. The premise of the talk was to examine the history and fast pace of the Docker projects; technology that is six months to a year old is now considered "legacy". Using various analogies, we can see how complexity and ease of use are correlated: as the internal complexity of a technology increases, the easier it tends to become to use. All of that was shown through a series of demos that will appear in some follow-up blog posts over the next week.
Docker 1.12 brought a few exciting features, most notably swarm mode. However, swarm mode introduces a new docker command for your containers. Gone are the days of using docker run or docker ps to manage your containers; the new command is docker service. This makes sense, as our applications are turning into individual services that need some level of availability, which Swarm now manages. But with it come some subtle changes in how you use volumes, volume drivers, and storage (SAN, NAS, DAS).
Using the typical docker run command, we would utilize a volume driver like this:
docker run -d --volume-driver=rexray -v mypgdata:/var/lib/postgresql/data postgres
This is pretty easy to read, and you know what it's doing: specify the volume driver, then map the host volume to the container mount. You can also specify multiple volumes and only have to use the volume-driver flag once:
docker run -d --volume-driver=rexray -v mypgdata:/var/lib/postgresql/data -v pgetc:/etc postgres
The new docker service command brings a few new intricacies, so how does this look?
docker service create --replicas 1 --name pg --mount type=volume,source=mypgdata,target=/var/lib/postgresql/data,volume-driver=rexray postgres
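To get the equivalent of the two-volume docker run example from earlier, the --mount flag can simply be repeated. A sketch of what that could look like (the volume names mirror the docker run example above; this assumes the rexray driver is installed on the swarm nodes):

```shell
docker service create --replicas 1 --name pg \
  --mount type=volume,source=mypgdata,target=/var/lib/postgresql/data,volume-driver=rexray \
  --mount type=volume,source=pgetc,target=/etc,volume-driver=rexray \
  postgres
```

Note that unlike the -v shorthand, the volume-driver has to be specified inside each --mount option, since each mount carries its own settings.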
A smart move in a world that is moving away from "VM" or "vSomething" branding. I'm sure VMTurbo won't be the only company (or person) to rebrand itself by the end of 2017. The focus has shifted away from VM monitoring, which has become harder and harder to differentiate. Most monitoring vendors are looking for their niche, and VMTurbo is no different.
An excerpt from the full Press Release says:
the company announced it was rebranding to become Turbonomic, the autonomic cloud platform, to reflect customers’ embrace of real-time autonomic systems that enable their application workloads to self-manage across private and public cloud environments, continuously maintaining a healthy state of performance, efficiency and agility with no manual intervention required.
Do you want to attend VMworld® 2016 US in Las Vegas this year, but your company won’t pay for the conference passes? Try your luck and win two full conference passes to VMworld®, courtesy of VMTurbo®.
Let VMTurbo send you to VMworld® 2016. Enter for a chance to win two free tickets.
THREE DRAWINGS: The sweepstakes starts May 4, 2016. Winners will be announced on the day of each drawing, and we will notify each winner by email.
I've made some gradual progress with my development ability: I started with web apps in Ruby on Rails, moved to better web apps with Node.js + Express, and have now started to get into systems programming with Go (golang). It's been a fun ride, and I've finished a few projects in Go so far. I'm adding another notch to the belt today with the Docker Machine Driver for RackHD.
This project wasn't necessarily challenging from a development standpoint; in fact, it was actually quite easy. Here's all the code -> https://github.com/emccode/docker-machine-rackhd/blob/master/rackhd.go. What made it challenging was interfacing with two projects that are under extremely heavy development. The first is RackHD. With nearly 40 people working on the code base, something is new or changing every few hours. Ideally, you want to keep up with the latest and greatest, but at some point you just have to stop at one commit and work from there. The second was go-swagger: another project under heavy development, one that uses Go while appealing to the Swagger community by making it easy to generate API bindings. It is changing every few hours as well. Combine the two and it's about two months of excruciating testing to make something concrete!
The first task was building Go API bindings, which was accomplished with gorackhd. This in itself proved to be something that would need updating every few weeks, because the Swagger spec was being updated every few days and new APIs were being added. I even found that out the hard way when I was trying to make a query lookup happen but it wasn't available (this line of code). Needless to say, I finally got to the point of having a working driver after pinning commits that were suitable to work from. Shoutout to Schley Kutz for creating a badass Makefile for gorackhd and even fixing issues with go-swagger.
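For anyone curious what generating bindings with go-swagger looks like, here is a rough sketch. The spec filename and application name below are placeholder assumptions, not the exact ones used for gorackhd:

```shell
# Install the go-swagger CLI, then generate a Go client from a Swagger spec.
go get -u github.com/go-swagger/go-swagger/cmd/swagger

# -f points at the Swagger spec, -A names the application,
# -t is the target directory for the generated client code.
swagger generate client -f monorail.yml -A rackhd -t ./client
```

Because the generated code is only as current as the spec it was built from, every upstream API change means regenerating, which is exactly the treadmill described above.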
This week I posted an update on Twitter and Facebook about my latest project, where I took all the wired window and door sensors from my old security system and integrated them into SmartThings. Many people said they wanted to do the same thing, so I knew my usual step-by-step, spoon-fed tutorial was in order.
What I ideally wanted was a way to automatically update the website from the latest changes in the master branch directly from GitHub. Of course, there are ways to re-upload every single file on the site, but why not use git version control to figure out which files were added or changed? There are tools out there like git-ftp that could do this, but that requires an extra step from me.
I had a few minutes today to finally start implementing and using a continuous integration tool. I decided to use Travis CI because it has native GitHub integration. My immediate gut reaction was to use Travis' FTP file transfer utility. However, there is a catch: the curl command requires you to specify the filename you want to upload, while my use case is for Git to give Travis all the changed/added files and have only those files uploaded.
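A rough sketch of what this could look like in .travis.yml — the FTP hostname, target directory, and the FTP_USER/FTP_PASS variables are placeholder assumptions; TRAVIS_COMMIT_RANGE is the environment variable Travis sets to the range of commits in the build:

```yaml
after_success:
  # Ask git for only the files changed in this build, then upload each one.
  - git diff --name-only "$TRAVIS_COMMIT_RANGE" | while read f; do
      curl -T "$f" "ftp://ftp.example.com/public_html/$f" --user "$FTP_USER:$FTP_PASS";
    done
```

The credentials should live as encrypted Travis environment variables rather than in the repository itself.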
On Monday October 12, 2015, NAKIVO released version 5.8 that can be installed directly onto a Western Digital NAS, thus creating a simple, fast, and affordable VM backup appliance, which can be used both onsite and offsite. Here are some of the details:
- NAKIVO Backup & Replication v5.8 can be installed directly onto Western Digital My Cloud DL series NAS.
- While NAKIVO Backup & Replication is already on par with or faster than the competition in terms of backup performance, we are seeing up to a 1.6X performance boost when our product is deployed directly on a Western Digital NAS. This is because backup data is written directly to the NAS disks, bypassing file protocols such as NFS and CIFS.
- NAKIVO Backup & Replication v5.8 can be deployed even on entry-level NAS devices, as the product requires just 2 CPU cores and 1 GB of RAM to be fully operational while still delivering high backup speeds. For example, the Western Digital DL2100 NAS with 12 TB of storage has a list price of less than $850, which is enough for the data backup needs of a typical VMware Essentials environment.
- When installed on a NAS, NAKIVO Backup & Replication delivers a number of benefits:
- All-in-one VM data protection – a VM backup appliance combines backup software, data deduplication, and backup hardware in a single solution that is affordable (5X vs. the competition), fast (over 1 Gbps backup), reliable (Western Digital + NAKIVO), and easy to manage.
More Articles ...
- Docker Machine and Complete Customization
- AT&T Uverse Doesn't Work With Docker Hub and They Want ME To Pay To Fix It
- CONTAINERS ARE THE FUTURE, IF … (my response)
- ContainerCon 2015 Slides
- Make The Embedded Libsyn Podcast Player Responsive
- Watch Out! Docker is Creating a New Infrastructure Platform
- VMTurbo Has a New SaaSy Offering
- Deploy ECS with 5 Ways of Docker