
How to Use Volume Drivers and Storage with New Docker Service Command

Docker 1.12 brought a few exciting features, most notably swarm mode. With swarm mode comes a new docker command for your containers. Gone are the days of using docker run or docker ps to manage your containers; the new command is docker service. This makes sense, since our applications are turning into individual services that need some level of availability, which Swarm now manages. But with it come some subtle changes in how you use volumes, volume drivers, and storage (SAN, NAS, DAS).

 

Using the typical docker run command, we would utilize volume drivers through the --volume-driver flag. 

docker run -d --volume-driver=rexray -v mypgdata:/var/lib/postgresql/data postgres

 

This is pretty easy to read, and you know what it's doing: specify the volume driver, then map the volume to the container mount. You can also specify multiple volumes and only have to use the --volume-driver flag once:

docker run -d --volume-driver=rexray -v mypgdata:/var/lib/postgresql/data -v pgetc:/etc postgres

 

The new docker service command brings a few new intricacies, so how does this look?

docker service create --replicas 1 --name pg --mount type=volume,source=mypgdata,target=/var/lib/postgresql/data,volume-driver=rexray postgres
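
Just like with docker run, you can mount more than one volume; with docker service create you simply repeat the --mount flag. Here is a sketch of the earlier two-volume example, assuming the same rexray driver for both mounts:

docker service create --replicas 1 --name pg \
  --mount type=volume,source=mypgdata,target=/var/lib/postgresql/data,volume-driver=rexray \
  --mount type=volume,source=pgetc,target=/etc,volume-driver=rexray \
  postgres

Note that the volume driver now lives inside each --mount option instead of being a single top-level flag.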


VMTurbo rebranding to Turbonomic

Smart move in a world where "VM" or "vSomething" branding is falling out of favor. I'm sure VMTurbo won't be the only company (or person) to rebrand by the end of 2017. The focus has shifted away from VM monitoring, which has become harder and harder to differentiate. Most monitoring products are looking for their niche, and VMTurbo isn't any different.

 

An excerpt from the full Press Release says:

the company announced it was rebranding to become Turbonomic, the autonomic cloud platform, to reflect customers’ embrace of real-time autonomic systems that enable their application workloads to self-manage across private and public cloud environments, continuously maintaining a healthy state of performance, efficiency and agility with no manual intervention required.

 


VMTURBO VMWORLD® 2016 SWEEPSTAKES

Do you want to attend VMworld® 2016 US in Las Vegas this year, but your company won’t pay for the conference passes? Try your luck and win two full conference passes to VMworld®, courtesy of VMTurbo®.

 

Let VMTurbo send you to VMworld® 2016. Enter for a chance to win two free tickets.

 

THREE DRAWINGS: MAY 27, JUNE 17, JULY 15

 

The sweepstakes starts May 4, 2016 and ends on July 15, 2016 at 11:59 PM EST. Winners will be announced on the day of each drawing, and we will notify each winner by email.

 

 


Advice from 25 Experts on Getting Started with Javascript

This might be shameless self-promotion, but I was recently contacted to give a statement as an "Expert" on JavaScript. The question was "What are the best methods or resources for learning JavaScript?". Oddly enough, I wouldn't consider myself an expert by any stretch, but my AirPair post on How to Create a Complete Express.js + Node.js + MongoDB CRUD and REST Skeleton has gained significant attention, which is probably how I ended up on the list.

 

Check out what others have said in psdtowp.com's post Learn JavaScript: The best methods and resources according to 25 JavaScript experts.

 

Here is my own advice:

 

Kendrick Coleman


The best way to learn JavaScript is to start with a front-end programming course. This could be in a classroom setting or with online courses such as TeamTreehouse.com. Learning to manipulate the DOM gives you a better understanding of how to use JavaScript to make things happen. After you have your feet wet, it's time to jump headfirst into Node.js. There are lots of different places to learn Node.js online, and each one of them is good in its own regard. Figure out some sort of basic script you want to write first that doesn't require a web stack. This will teach you about callbacks. Once you have standard server-side scripting in your arsenal, you can move to web frameworks such as express.js, meteor.js, and more!


Latest Project with Docker Machine and RackHD

I've made some gradual progression in my development ability: I started with web apps in Ruby on Rails, moved to better web apps with Node.js + Express, and have now started to get into systems programming using Go (golang). It's been a fun ride, and I've got a few projects done in Go so far. I'm adding another notch to the belt today with the Docker Machine Driver for RackHD.

 

This project wasn't necessarily challenging from a development standpoint. In fact, it was actually quite easy. Here's all the code ->  https://github.com/emccode/docker-machine-rackhd/blob/master/rackhd.go. What made this challenging was trying to interface with two projects that are under extremely heavy development. The first is RackHD. With nearly 40 people working on the code base, something is new or changing every few hours. Ideally, you want to keep up with the latest and greatest, but at some point you just have to stop at one commit and work from there. The second piece was Go-Swagger. Again, another project under heavy development, making use of Go while appealing to the Swagger community to easily generate API bindings. This project is changing every few hours as well. Combine the two, and it's about two months of excruciating testing to make something concrete!

 

The first task was building Go API bindings, and that was accomplished with gorackhd. This in itself proved to be something that needs updating every few weeks, because the Swagger spec is being updated every few days and new APIs are being added. I found that out the hard way when I was trying to make a query lookup happen but it wasn't available yet (this line of code). Needless to say, I finally got to the point of having a working driver after pinning commits that are known to work together. Shoutout to Schley Kutz for creating a badass Makefile for gorackhd and even fixing issues with go-swagger.
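
To give you an idea of what that binding step looks like, regenerating the client boils down to a single go-swagger command against a pinned copy of the RackHD spec. This is only a sketch; the spec file name and the -A application name below are placeholders, and the gorackhd Makefile is the source of truth:

# regenerate Go client bindings from a pinned RackHD Swagger spec
# (swagger.json and the gorackhd name are placeholders; see the gorackhd Makefile for the real recipe)
swagger generate client -f swagger.json -A gorackhd

With the generated client pinned at a known-good commit, the Docker Machine driver can import it like any other Go package.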


Total Noob Guide To Move Your Old Wired Security System to SmartThings

This week I posted an update on Twitter and Facebook about my latest project, where I took all the wired window and door sensors from my old security system and integrated them into SmartThings. Many people said they wanted to do the same thing, and I knew that my usual step-by-step, spoon-fed tutorial was in order.

 

 


Use Travis CI to Update Your Website using FTP and Git

After learning how to build websites from scratch using HTML, CSS, and JavaScript (like bourbonpursuit.com and emccode.github.io), I have ventured out of the realm of normal CMS platforms like WordPress and Joomla. Most of the sites I build are only a few pages and aren't big content monstrosities. I use Sublime as my local editor and git for version control. But after I make changes and want to push the files live, I have to fall back on normal FTP methods to move those files to the shared hosting server. That gets really confusing when the changes touch lots of files. Plus, it's boring work.

 

What I ideally wanted was a way to automatically update the website with the latest changes in the master branch, directly from GitHub. Of course, there are ways to re-upload every single file on the site, but why not just use git to figure out which files were added or changed? There are tools out there like git-ftp that could do this, but that requires an extra step from me.

 

I had a few minutes today to finally start implementing and using a continuous integration tool. I decided to use Travis CI because it has native GitHub integration. My immediate gut reaction was to use Travis' FTP file transfer utility. However, there's an issue: the curl command requires you to specify each filename you want to upload. My use case is for git to give Travis all the changed/added files and have only those files uploaded.
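
In other words, what I'm after boils down to a small shell loop in the Travis build: ask git which files changed, then hand each one to curl. Here is a minimal sketch of that idea; FTP_USER, FTP_PASSWORD, and FTP_HOST are hypothetical encrypted environment variables you would set in the Travis settings, and a fuller version would probably diff $TRAVIS_COMMIT_RANGE instead of just the last commit:

# upload only the files added or changed in the latest commit
for f in $(git diff --name-only --diff-filter=ACM HEAD~1 HEAD); do
  curl --ftp-create-dirs -T "$f" -u "$FTP_USER:$FTP_PASSWORD" "ftp://$FTP_HOST/$f"
done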


NAKIVO Enables Converting Western Digital NAS into a VM Backup Appliance

On Monday, October 12, 2015, NAKIVO released version 5.8, which can be installed directly onto a Western Digital NAS, creating a simple, fast, and affordable VM backup appliance that can be used both onsite and offsite. Here are some of the details:

  • NAKIVO Backup & Replication v5.8 can be installed directly onto Western Digital My Cloud DL series NAS.
  • While NAKIVO Backup & Replication is already on par with or faster than the competition in terms of backup performance, we are seeing up to a 1.6X performance boost when our product is deployed directly on a Western Digital NAS. This is because backup data is written directly to the NAS disks, bypassing file protocols such as NFS and CIFS.
  • NAKIVO Backup & Replication v5.8 can be deployed even on entry-level NAS devices, as the product requires just 2 CPU cores and 1 GB of RAM to be fully operational while still delivering high backup speeds. For example, the Western Digital DL2100 NAS with 12 TB of storage has a list price of less than $850, which is enough for the data backup needs of a typical VMware Essentials environment.
  • When installed on a NAS, NAKIVO Backup & Replication delivers a number of benefits:
    • All-in-one VM data protection – a VM backup appliance combines backup software, data deduplication, and backup hardware in a single solution that is affordable (5X vs. the competition), fast (over 1 Gbps backup), reliable (Western Digital + NAKIVO), and easy to manage.

Docker Machine and Complete Customization

NOTE: THIS IS NOT SUPPORTED BY DOCKER OR EMC. THIS IS NOT EVEN CONSIDERED A VERSION OF DOCKER MACHINE. THIS IS A PROJECT FLING TO PROVE THE CAPABILITY.

TL;DR: grab the Docker Machine with Extensions binary and try it yourself, or watch the YouTube video at the very bottom to see it in action.

 

I've had some fun playing with Go the past few weeks and I was able to create a very powerful customization. I present to you Docker Machine with Extensions! Using a standard template, it's possible to have a completely customized Docker Machine installation.

 

But, why is this important? 

 

Docker Machine gets you a “Docker-ready” host: it automatically configures the host OS to run Docker containers, and the host can be joined to a Swarm cluster. But what about everything else that goes into daily operations? Configuration management, Docker Engine pluggable extensions, crazy security configurations, etc.! Those are the things that can push Docker Machine that extra mile.
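
For context, stock Docker Machine stops at something like this (the virtualbox driver is just an example here):

docker-machine create -d virtualbox dev    # provision a Docker-ready host
eval $(docker-machine env dev)             # point your local docker client at it
docker run hello-world                     # and start running containers

Everything beyond that point has traditionally been up to you.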

 

The EMC {code} team came up with a clever way to have generic and native “extensions” using a standard JSON file. In short, here is what the JSON file allows you to do (a rough sketch follows the list):

  • Environment Variables: Set environment variables in /etc/environment that can be used to customize anything
  • Copy: Specify a source and destination and it will invoke the docker-machine scp command to move files from your local host to a remote host or between remote hosts. This can be used to move binaries, transfer configuration files, etc.
  • Run: Create an ordered list of commands to run. Install packages, move files, or do anything.
  • ValidOS: Specify a range of operating systems that will work with this configuration.
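
To make that concrete, a file along these lines would drive the provisioning. The key names below are purely illustrative, not the project's actual schema, so treat this as a hypothetical sketch and check the repo for the real format:

{
    "environment": { "MY_SETTING": "some-value" },
    "copy": [
        { "source": "./configs/app.conf", "destination": "/etc/app.conf" }
    ],
    "run": [
        "sudo apt-get update",
        "sudo apt-get install -y htop"
    ],
    "validOS": [ "ubuntu", "centos" ]
}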

AT&T Uverse Doesn't Work With Docker Hub and They Want ME To Pay To Fix It

I really didn't want to write a "hate" piece, but I've been backed into a corner... AT&T Uverse Internet DOES NOT WORK with Docker Hub. Also, AT&T wants ME TO PAY to have it fixed. Here's my story.

**UPDATE 9/25/2015**

Uverse is now working with Docker Hub. Thanks to everyone who was involved!

 

I had Uverse installed on 9/21/2015. I had been hacking on a new Docker Machine piece over my Verizon MiFi for a few days while I was at the office. Since my kitchen is being remodeled, I wanted to be around to monitor the progress, so I figured I'd just work from home. Today is 9/24/2015.

 

I finished a commit and needed to run a build. So I used my Docker Machine host to do that. I began the process and it got stuck:

kcoleman-mbp:machine kcoleman$ make
script/validate-dco
Congratulations!  All commits are properly signed with the DCO!
script/validate-gofmt
Congratulations!  All Go source files are properly formatted.
script/test
./...
Sending build context to Docker daemon 131.8 MB
Step 0 : FROM golang:1.4.2-cross
Pulling repository docker.io/library/golang
Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/golang/images. You may want to check your internet connection or if you are behind a proxy.
make: *** [test] Error 1

 

What? That's weird. Well what about just trying to run a container?

kcoleman-mbp:v050-dev kcoleman$ docker run -ti busybox
Unable to find image 'busybox:latest' locally
Pulling repository docker.io/library/busybox
Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/busybox/images. You may want to check your internet connection or if you are behind a proxy.

 

Maybe it was an issue with my home network, since I have another router. I hopped on the Uverse Wi-Fi directly: same thing. Disabled the entire firewall: same thing. Gave my Mac IP Passthrough: same thing.

