Amazon AWS is awesome and everyone knows it. So how can we utilize it with vCAC?
At this point you should have finished the previous seven steps.
We need to create some Amazon AWS IAM credentials. Just about everyone has an Amazon.com login, but you need to go a step further if you have never used Amazon AWS for EC2 or S3: basically, enter some credit card information so you can get rolling.
After you have joined AWS, you should see a dashboard like this, showing all the services available to you.
Go to the Security Credentials page for your AWS account.
Make sure you are using IAM (Identity and Access Management) for your AWS keys. The screen will look like the screenshot below. Click on the Groups section.
Click on New Group, give the group a name, and click Continue.
Select "Administrator Access".
Don't change anything in the JSON policy; click Continue.
Click on Create group.
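For the curious, the "Administrator Access" template is nothing magic: it boils down to a JSON policy document that allows every action on every resource. Here is a minimal sketch of that document built in Python (the exact policy text AWS generates may differ slightly):

```python
import json

# Sketch of an allow-everything IAM policy, like the Administrator template.
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*",
        }
    ],
}

print(json.dumps(admin_policy, indent=2))
```

This is why you don't need to touch the JSON in the wizard: a single statement with wildcards covers everything.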
Now click on Users, then Create a New User. Give the user a name and click Create, making sure access keys are generated for the user.
Click on "Show Security Credentials" and take this time to Download Credentials to a safe place, because you will never be able to retrieve this secret access key again. DO NOT CLOSE THIS WINDOW.
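The downloaded credentials file is a small CSV. If you later need to pull the keys out of it programmatically, a sketch like this works; the header names and the sample values here are assumptions for illustration, so check them against your actual file:

```python
import csv
import io

# Made-up sample contents of a downloaded credentials CSV.
sample = """User Name,Access Key Id,Secret Access Key
vcac-svc,AKIAEXAMPLEKEY123456,wJalrXUtnFEMI/K7MDENG/bPxRfiEXAMPLEKEY
"""

# Parse out the key pair so it can be pasted into vCAC later.
row = next(csv.DictReader(io.StringIO(sample)))
access_key = row["Access Key Id"]
secret_key = row["Secret Access Key"]
print(access_key, secret_key)
```

Keep that file somewhere safe; the secret half of the pair is only ever shown once.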
Now go back to your vCAC dashboard. The first step is storing our AWS credentials for use by an endpoint. Navigate to Infrastructure -> Endpoints -> Credentials and click on New Credentials.
From here, give the credentials a name and a description. Copy and paste your Access Key and Secret Key into the Username and Password fields, respectively. Click the green check mark to save your credentials.
Now navigate within vCAC to Infrastructure -> Endpoints -> Endpoints and create a new Amazon EC2 endpoint.
Give your EC2 endpoint a name that identifies the AWS account being used, and select the AWS credentials we created earlier. Click OK, and vCAC will start fetching data from AWS.
Go back to the Amazon IAM window, close the pop-up, and add the user to the group we created.
Within the vCAC dashboard, choose our endpoint and click on Data Collection.
View the status of the AWS collection here; you will be able to see whether it is working or has failed. Refresh until you see a "successful" message. This could take a few minutes, so go grab a coffee.
Navigate to Infrastructure -> Blueprints -> Instance Types. This shows all the instance types gathered from AWS. You can also create a new instance type for anything else you would like to add to the catalog.
To make these resources available, we need to add them to a Fabric Group. Remember, fabric administrators are responsible for creating reservations on the compute resources in their groups to allocate fabric to specific business groups. Fabric groups are created in a specific tenant, but their resources can be made available to users who belong to business groups in all tenants. So edit the existing Fabric Group.
Add in a few locations for AWS and click OK.
Now we can check that data collection is running. Navigate to Infrastructure -> Compute Resources, and you will see the new cloud sites become available. You can also look at Data Collection to see how far along it has gone; this will take a few minutes per site.
Now we need to create a reservation. Navigate to Infrastructure -> Reservations, then create a new reservation: Cloud -> Amazon EC2.
Select one of the compute clouds and give it a priority. Select the Resources tab when complete.
To make my security groups show up, I needed to select "Assign to a subnet in a VPC". I selected all of my security groups to keep things easy. Click on the green check mark to save, and go to the Alerts tab.
I turned the Alerts to ON and pressed OK.
Let's create a blueprint. Navigate to Infrastructure -> Blueprints -> Add a New Blueprint, and choose Cloud -> Amazon EC2.
Give the blueprint a name, make sure you check "Display location upon request", and select a machine prefix. I'm going to create an Ubuntu machine in the free tier. Select the Build Information tab when complete.
There will only be a single option for the Blueprint Type and Provisioning Workflow, so you can leave those as defaults. Click on the "..." next to Amazon Machine Image. You will probably get a couple thousand pages of images, so it's much easier to search the AWS catalog ahead of time and figure out which AMI to use. I already researched that ami-fa9cf1ca is an Ubuntu Server 12.04.3 LTS (PV) image: EBS-backed, 64-bit, with support available from Canonical.

The key pair is important because it's what you use to connect to your provisioned AWS instance. If you already have an existing key pair, it can be used; otherwise you can auto-generate a new key pair per business group, which means each machine provisioned in the same business group shares the same key pair (if you delete the business group, its key pair is also deleted). If a key pair is instead auto-generated per machine, each machine gets a unique key pair. I chose to enable both Micro and Small instance types so there is a range of availability. No other configuration is necessary at this time, so press OK.
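Since paging through thousands of AMIs in the picker is painful, it can help to narrow the list down programmatically first. Here is a sketch of that filtering in plain Python; the record shape loosely mirrors what EC2's DescribeImages returns, but the field names and the second sample image are assumptions for illustration:

```python
# Hypothetical subset of fields from an EC2 image listing (sample data).
images = [
    {"ImageId": "ami-fa9cf1ca",
     "Name": "ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server",
     "RootDeviceType": "ebs", "Architecture": "x86_64",
     "VirtualizationType": "paravirtual"},
    {"ImageId": "ami-11111111", "Name": "amzn-ami-hvm-2013.09",
     "RootDeviceType": "instance-store", "Architecture": "x86_64",
     "VirtualizationType": "hvm"},
]

def find_ubuntu_pv_ebs(images):
    """Return IDs of 64-bit, EBS-backed, paravirtual Ubuntu AMIs."""
    return [
        img["ImageId"]
        for img in images
        if "ubuntu" in img["Name"].lower()
        and img["RootDeviceType"] == "ebs"
        and img["Architecture"] == "x86_64"
        and img["VirtualizationType"] == "paravirtual"
    ]

print(find_ubuntu_pv_ebs(images))  # -> ['ami-fa9cf1ca']
```

Once you have the AMI ID in hand, you can type it straight into the blueprint instead of scrolling.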
Now let's publish our blueprint.
Now we need to add it to our catalog. Navigate to Administration -> Catalog Items and find the Ubuntu AWS item. Select Configure from the drop-down on the far right-hand side.
I added an AWS image. Then select the service this item will be available under, and click Update.
Now if we browse to our catalog, we will see our new item. Click on Request.
Within the request screen, we need to choose an instance type. To make sure I stay in the free tier, I keep it at Micro. I also want to provision it to a subnet in a VPC. Once that is selected, move to the Storage tab.
If you need additional storage, this will create an Elastic Block Store volume on AWS and attach it to the VM. I don't need to do this. Click on the Network tab.
Choose a subnet to deploy this VM into, as well as all of the security groups (I'm not sure why all are required). Once that is complete, click Submit.
We can see our request has been submitted successfully.
Let's take a look in AWS. Eureka! We have VMs being provisioned in AWS!