
4 Factors to Consider when Picking a PCB Design Tool

If you pick engineering design software based on the wrong criteria, you will, at best, get a product that takes more time and money to utilize than you’d otherwise spend. At worst, you’ll buy software that fails to meet your needs and gets in the way of work. Here are four factors to consider when picking a printed circuit board tool.

Maximizing the Efficiency of Your Team

The best PCB design tools maximize your team's efficiency by automating as many tasks as possible or simplifying them to the greatest degree. For example, software that automatically checks for electromagnetic interference and thermal problems eliminates the need for your team to do that work manually.

PCB tools that make it easy to check the design's dimensions relative to the rest of the assembly, or to export designs so you can send them to your board manufacturer for initial input, are preferable to those that make these steps a chore. If exporting a design to verify that it will work once built is time-consuming or frustrating, you're unlikely to do it more than once. If the process is simple, you can run such checks repeatedly without much extra work.


My Constraints Aren’t Your Constraints: A Lesson to Learn with Containers

After digging through the details of the hottest new technology, have you ever immediately thought, "we need to start using this tomorrow!"? This is a common pitfall I see often. Buzzwords get tossed around so frequently that you start to feel you are doing things the wrong way.


Let’s take Netflix as an example. Netflix is widely known as the company that made microservice architecture popular (or, better yet, Adrian Cockcroft did). Netflix’s goal of delivering streaming content involved a lot of different services, but the company also needed a way to increase the speed at which those services could be updated. Amazon’s Jeff Bezos is famously quoted in the API mandate as saying, “All teams will henceforth expose their data and functionality through service interfaces.” This was done to allow any business unit, from marketing to e-commerce, to communicate and collect data over these APIs and make that data externally available. However, take a step back and think about what these companies are doing. Yes, they are pinnacles of modern technology advancement and software application architecture, but one is a streaming movie service and the other is a shopping cart (the mandate came out in 2002). If my bank has externally facing APIs that only use basic auth, I’m finding a new bank. That’s a constraint.


What about your business? Most enterprises have roots so deep it is difficult, if not impossible, to lift and shift.


New Site Sponsor and Free Tool from Vembu

This site is known for free tools. That's really where it became popular. I'm happy to announce that Vembu, a player in the virtualization backup and recovery space, is now a sponsor. Please take a minute to view their website and read the press release below. Give them a try; there is a free trial and even a free version!

CHENNAI - Feb 17, 2017: Vembu, a rapidly evolving Backup & Disaster Recovery company, has released a comprehensive free edition of its latest Vembu BDR Suite v3.7.0 for data centers that deploy both virtual and physical environments. The free edition will benefit anyone who wishes to try Vembu BDR Suite in production and testing environments at no cost.


Try the Free Trial here -


Unlike other free edition software available on the market, the Vembu BDR Suite Free Edition covers all the major features needed for backup and recovery across the varied requirements of a data center. Vembu BDR Suite covers the following environments:


Free VMware Backup: Vembu VMBackup Free Edition for VMware offers backup of unlimited VMs running on an ESXi or vCenter server at no cost, especially for businesses that do not have a sophisticated data protection method to protect their VMs. It supports multiple VMware transport modes, such as Direct SAN, HotAdd, and Network-based (NBD & NBDSSL); VMBackup automatically analyzes and chooses the appropriate transport mode. Vembu VMBackup provides fast and flexible recovery options, including recovery of individual files and folders from the backed-up data.


Free Hyper-V Backup: Vembu VMBackup Free Edition for Hyper-V is built to overcome the complexities of creating backup policies for VMs running on a Hyper-V server. Vembu has developed its own proprietary driver to back up Hyper-V VMs efficiently, with up to 5X better performance than other backup software. VMBackup supports VMs located on Hyper-V Cluster Shared Volumes and Windows SMB shares. Hyper-V backups take application-consistent snapshots of highly transactional applications like Exchange Server and SQL Server using the Microsoft VSS writer, and truncate the transaction log files during the backup job. Vembu VMBackup Free Edition for Hyper-V is designed to protect Microsoft servers at both the host level and the VM level.


Free Windows Server Backup: Vembu previously provided a free edition of its software only for workstations such as desktops and laptops, but it has now been extended to Windows Servers as well. Vembu ImageBackup Free Edition is a backup and disaster recovery solution for physical Windows environments. It backs up the entire disk image of Windows servers, desktops, and laptops, including the operating system, applications, and files. Vembu ImageBackup also helps migrate Windows machines from physical environments to virtual environments like VMware or Hyper-V (P2V).



Building a Private Cloud with Containers: The Learning Curve

Within the IT community and outside of it there is growing interest in private and hybrid cloud architectures. Many organizations are considering, or already building, a virtualized infrastructure to achieve something like a public cloud on Azure or AWS, only on-premises using in-house resources.


In the days when VMware was the go-to virtualization technology, vRealize/vCloud was the obvious choice for orchestrating a private cloud, and later, it was OpenStack. Today, with the growing hype around container technologies, you would very likely consider using Kubernetes and Docker to set up a private cloud.


The discussion about whether to use virtual machines or containers has been going on for quite some time. Many questions have been raised regarding container setup and management, security, and which applications are a better fit for containerization. More specifically, Robert Eriksson asks how complex Docker really is, while Kiran Oliver wonders whether Kubernetes is where it gets tricky – check out my recent post in which I show that Kubernetes isn't as difficult as it used to be.


Is 2017 The Year for Kubernetes?

The container space is full of leapfrogging technology, and it seems impossible to keep up with the pace. Only two years ago, Kubernetes was starting to get attention. Compared to the other solutions on the market, it was trailing in a distant third place. It wasn’t stable and had a steep learning curve, especially since containers themselves were already part of the learning curve.


However, this week in Seattle marks the final KubeCon before it transitions to CloudNativeCon in 2017. The conference is oversold and packed tighter than a can of sardines. Seven months ago, if you had asked me how Kubernetes stacked up, I would have said it didn’t have a fighting chance. About four months ago, customers were asking the {code} team for integrations with Kubernetes so we could stay part of the larger conversation. With a bit of hacking, Clint Kitson was able to develop a POC with REX-Ray and Kubernetes over a weekend. It all started becoming very real about two months ago, when we realized that 75% of our customer interactions were focused on Kubernetes over competing technologies.


What changed? Honestly, I don’t know. Perhaps the deployment, configuration, and architecture had stabilized. Did the technology leapfrog what others had to offer? Is the idea of Google being the core contributor the biggest selling point? Is everyone in love with Kelsey Hightower? Or maybe it was a combination of all that with community involvement. 


How to Use Volume Drivers and Storage with New Docker Service Command

Docker 1.12 brought a few exciting features, most notably swarm mode. However, swarm mode also brought a new docker command for your containers. Gone are the days of using docker run or docker ps for managing your containers; the new command is docker service. This makes sense, as our applications are turning into individual services that need some level of availability, which Swarm now manages. But with it come some subtle changes in regard to using volumes, volume drivers, and storage (SAN, NAS, DAS).


Using the typical docker run command, we would utilize volume drivers through the --volume-driver flag. 

docker run -d --volume-driver=rexray -v mypgdata:/var/lib/postgresql/data postgres


This is pretty easy to read and you know what it's doing: specify the volume driver, then the host mount mapped to the container mount. You can also specify multiple volumes and only have to use the volume-driver flag once:

docker run -d --volume-driver=rexray -v mypgdata:/var/lib/postgresql/data -v pgetc:/etc postgres
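As a quick sanity check (assuming the rexray plugin is installed and the Docker daemon is running), you can confirm the volumes were actually created through the driver before trusting them with data:

```shell
# List only the volumes backed by the rexray driver
docker volume ls --filter driver=rexray

# Inspect one volume to see its driver and mountpoint details
docker volume inspect mypgdata
```

If mypgdata shows up under a different driver (or not at all), the --volume-driver flag never took effect and the container is writing to a local volume instead.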


The new docker service command brings a few new intricacies, so how does this look?

docker service create --replicas 1 --name pg --mount type=volume,source=mypgdata,target=/var/lib/postgresql/data,volume-driver=rexray postgres
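One subtle difference worth noting: with docker service create there is no single --volume-driver flag that applies to every volume; the driver is part of each mount specification, so multiple volumes mean repeating the whole --mount option. Here is a sketch of the earlier two-volume docker run example rewritten for swarm mode, using the same hypothetical rexray volumes:

```shell
docker service create --replicas 1 --name pg \
  --mount type=volume,source=mypgdata,target=/var/lib/postgresql/data,volume-driver=rexray \
  --mount type=volume,source=pgetc,target=/etc,volume-driver=rexray \
  postgres
```

It is more verbose, but each mount is self-describing, which matters once the scheduler can place replicas on any node in the swarm.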


VMTurbo rebranding to Turbonomic

Smart move in a world where "VM" or "vSomething" branding is falling out of favor. I'm sure VMTurbo won't be the only company (or person) to rebrand by the end of 2017. The focus has shifted away from VM monitoring, which has become harder and harder to differentiate. Most monitoring companies are looking for their niche, and VMTurbo isn't any different.


An excerpt from the full Press Release says:

the company announced it was rebranding to become Turbonomic, the autonomic cloud platform, to reflect customers’ embrace of real-time autonomic systems that enable their application workloads to self-manage across private and public cloud environments, continuously maintaining a healthy state of performance, efficiency and agility with no manual intervention required.




Do you want to attend VMworld® 2016 US in Las Vegas this year, but your company won’t pay for the conference passes? Try your luck and win two full conference passes to VMworld®, courtesy of VMTurbo®.


Let VMTurbo send you to VMworld® 2016. Enter for a chance to win two free tickets.




The sweepstakes starts May 4, 2016 and ends July 15, 2016 at 11:59 PM EST. Winners will be announced the same day as each drawing, and we will notify each winner by email.




Advice from 25 Experts on Getting Started with JavaScript

This might be shameless self-promotion, but I was recently contacted to give a statement as an "expert" on JavaScript. The question was, "What are the best methods or resources for learning JavaScript?" Oddly enough, I wouldn't consider myself an expert (by any stretch), but my AirPair post, How to Create a Complete Express.js + Node.js + MongoDB CRUD and REST Skeleton, has gained significant attention.


Check out what others have said in the post Learn JavaScript: The best methods and resources according to 25 JavaScript experts


Here is my advice:


Kendrick Coleman

The best way to learn JavaScript is to start with a front-end programming course. This could be in a classroom setting or with online courses. Learning to manipulate the DOM gives you a better understanding of how to use JavaScript to make things happen. After you have your feet wet, it's time to jump head first into Node.js. There are lots of different places to learn Node.js online, and each one of them is good in its own regard. Figure out some sort of basic script you want to write first that doesn't require a web stack; this will teach you about callbacks. Once you have standard server-side scripting in your arsenal, you can move on to web frameworks such as Express.js, Meteor.js, and more!


Latest Project with Docker Machine and RackHD

I've made some gradual progress in my development ability. I started with web apps in Ruby on Rails, moved to better web apps with Node.js + Express, and have now started to get into systems programming using Go (golang). It's been a fun ride, and I've finished a few projects in Go so far. I'm adding another notch to the belt today with the Docker Machine Driver for RackHD.


This project wasn't necessarily challenging from a development standpoint; in fact, it was actually quite easy. Here's all the code -> What made this challenging was trying to interface with two projects that are under extremely heavy development. The first is RackHD. With nearly 40 people working on the code base, something is new or changing every few hours. Ideally, you want to keep up with the latest and greatest, but at some point you just have to stop at one commit and work from there. The second was go-swagger, another project under heavy development that makes use of Go while appealing to the Swagger community by easily generating API bindings. This project is changing every few hours as well. Combine the two, and it took about two months of excruciating testing to make something concrete!


The first task was building Go API bindings, which was accomplished with gorackhd. This in itself proved to be something that would need updating every few weeks, because the Swagger spec is updated every few days and new APIs are being added. I even found that out the hard way when I was trying to make a query lookup happen but it wasn't available (this line of code). Needless to say, I finally got to the point of having a working driver after pinning commits that were suitable to work from. Shoutout to Schley Kutz for creating a badass Makefile for gorackhd and even fixing issues with go-swagger.


