Monday 28 December 2015

Old Phone - new Android

I've been increasingly frustrated by how slowly my old smartphone has been responding lately and wanted to find a solution before heading into the new year. The phone is a 2012 HTC One S and I'm well used to it, but it's getting sluggish and I was wondering whether it would be better to wipe it or replace it.

I looked at a replacement M8 handset for €499, or a budget Chinese smartphone for under €200, as options. One thing I was interested in, however, was trying an unofficial Android variant I'd been hearing about. I'd never tried unlocking my phone before, except to use SIMs from operators other than the one I'd purchased it with. CyanogenMod was one I'd come across a few times, but I had NO idea how to go about installing it. Turns out it WAS as hard as I suspected!!! But next time around it will be easier, and I'm keeping notes in this post to help me and anyone else interested in giving it a go!

Needless to say, this is unsupported by Google, the smartphone manufacturer and anyone else! The risks include poor-quality phone calls, a crashing phone, malfunctioning hardware and so on. There was only one way to find out though!

So CyanogenMod has a good wiki with walk-throughs for most mainstream phone types. They also have an easy installer, but this had recently been pulled due to a security issue, so I had to go old school.

I used Easy Backup & Restore to take a backup of my contacts, messages etc. and put the backup file on the SD Card.

The main wiki page for Cyanogen is as follows:

https://wiki.cyanogenmod.org/w/Main_Page

They have forums here:

http://forum.cyanogenmod.org

There is a list of pre-compiled builds, or you can make your own (!). I used the latest Release and Recovery builds for my phone as listed here:

https://download.cyanogenmod.org/?device=ville

If you can find your own smartphone in the list on the left, at least you know you're not on your own! The forum has a sub-section dedicated to each phone. Read it first to see what the known issues are; you may or may not encounter them, but it's best to be forewarned!!

The main install guide I used was:

https://wiki.cyanogenmod.org/w/Install_CM_for_ville

This walked me through the main steps except unlocking the HTC itself. I needed to create an account on HTCDEV for that:

http://htcdev.com/bootloader/

Now I seemed to need a Linux OS. I tried using Ubuntu 15.10 on VMware Workstation, but the USB connection kept dropping as "unrecognised" when the phone was booted into "fastboot" mode. I ended up installing Ubuntu on my Intel NUC using a USB stick created with Rufus:

https://rufus.akeo.ie

This takes the Ubuntu ISO and puts it on the USB stick in BIOS or UEFI mode. I used UEFI, and after the install Ubuntu was up and running. I could now plug the HTC into the NUC and start getting the tools working. After pointing Ubuntu at a local update mirror I could install adb & fastboot:

sudo apt-get install android-tools-adb
sudo apt-get install android-tools-fastboot

(I think those are the right package names - Ubuntu will suggest the correct package if you try to run adb or fastboot and it's missing.)

HTC's site offered a fastboot binary as part of the unlock process and I used this instead of the downloaded one to retrieve the unlock code from the phone.

If you enable USB Debugging you can issue the command "adb reboot bootloader", and once rebooted the command "fastboot devices" should list the phone. Try "fastboot oem device-info" or "fastboot oem get_identifier_token" to get the token. Paste this into the HTC site (you need to be logged on with a verified dev account), removing any spaces and the INFO prefixes, and it should email you a .bin file. Copy that to the Linux machine and follow the remaining steps from the HTC dev site to finish the unlock process. Sorry, I didn't think to record the steps, but it's well documented. I'm sure there's a similar process for other phones.
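
From memory the sequence looks roughly like this (the token output and the unlock file name may differ slightly on your handset; follow HTCDEV's own instructions):

adb reboot bootloader                       # reboot the phone into the bootloader
fastboot devices                            # confirm the phone is visible over USB
fastboot oem get_identifier_token           # prints the token you paste into htcdev.com
fastboot flash unlocktoken Unlock_code.bin  # flash the .bin file HTC emails back to you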

Now you can flash the downloaded Recovery image, and once that is up and running you copy the main image and the Google Apps image to your phone, choose the upgrade option for each, browse to the files, and reboot:

http://wiki.cyanogenmod.org/w/Google_Apps#Downloads

I installed the main OS, then installed the boot.img, booted into the new OS and then went back to recovery mode to install the Google Apps I'd forgotten about!
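
For what it's worth, the rough shape of the flashing commands is below (the file names are placeholders for the builds you actually downloaded); the two zips are then installed from the recovery menu:

fastboot flash recovery recovery.img          # the Recovery build downloaded earlier
adb push cm-12.1-xxxxxxxx-ville.zip /sdcard/  # the main CyanogenMod image
adb push gapps.zip /sdcard/                   # the Google Apps package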

Next it's restore time. I used Easy Backup & Restore again, and the only issue was having to reselect the backup file for each category I wanted to restore. Doing this a few times I got back all my contacts, call history, messages etc. By installing Google Apps, which gives you Google Play access, I could install my favourite apps, download a new theme from Cyanogen and away I went. Phone call quality was fine and photos also worked. The SD card preserved a few things like my photos, but all told it took about a day to go from Android 4.1.1 to 5.1.1, and I've now been able to install some apps that wouldn't work on the older Android OS.

It's still day one so I don't know how this will work out, but the smartphone is more responsive now and flows between menus and apps very nicely. The new UI breathes life back into an old phone, and despite the learning curve it was worthwhile and saved me some money. I'm also bloatware-free, which was another bonus - no more skipping updates for all the junk apps I didn't want but couldn't uninstall before. Would I do this with a brand new phone? Probably not, but for an older one where the manufacturer stopped releasing updates over two years ago, leaving you wide open to security vulnerabilities, this may be one way to respond, while also gaining control options not normally exposed to end users.

I hope this gives you an insight into the process. It may be possible to do this with Windows but I found the Ubuntu approach interesting and had the spare hardware once VMware Workstation proved unsuitable.

Thanks to everyone over at CyanogenMod for all their hard work and for opening up later Android versions to those of us with older handsets! They are working on CM 13, which gives Marshmallow; mine is now running CM 12.1, which is Lollipop. You can opt for the more beta releases, but obviously be prepared for bugs and stability issues.....

Thursday 17 December 2015

Installing HP Cloudsystem 9.0 - Part 9

I'm using my Lab to learn about Cloudsystem 9.0 from time to time. One of the difficulties I find is that it can take up to an hour to power on the Cloudsystem 9.0 appliances by hand! Shutting them down, by contrast, is easy. I looked into scripting some of this using PowerCLI and have come up with the scripts below to accomplish most of what I'm after.

To start, we'll set up the ma1 appliance so that it can use public keys when connecting to all the other appliances. Then, using plink, we can call the ma1 appliance from PowerCLI to shut down each of the appliances or run initialisation commands on startup.

Disclaimer - I'm doing this in a Lab; check with your Linux gurus & HP Support before touching Production. Use at your own risk!

PuTTY/SSH into cs-mgmt1 (ma1) and create a new key pair:

ssh-keygen -t rsa

Press enter a few times to get the key generated (no passphrase required). Now copy the key to each of the appliances & test as you go:

ssh-copy-id cloudadmin@ua1
ssh cloudadmin@ua1  (this should log you onto ua1 without prompting for a password)
exit
ssh-copy-id cloudadmin@mona3
ssh cloudadmin@mona3
ssh-copy-id cloudadmin@mona2
ssh cloudadmin@mona2
ssh-copy-id cloudadmin@mona1
ssh cloudadmin@mona1
ssh-copy-id cloudadmin@ea3
ssh cloudadmin@ea3
ssh-copy-id cloudadmin@ea2
ssh cloudadmin@ea2
ssh-copy-id cloudadmin@ea1
ssh cloudadmin@ea1
ssh-copy-id cloudadmin@cc2
ssh cloudadmin@cc2
ssh-copy-id cloudadmin@cc1
ssh cloudadmin@cc1
ssh-copy-id cloudadmin@cmc
ssh cloudadmin@cmc
ssh-copy-id cloudadmin@ma3
ssh cloudadmin@ma3
ssh-copy-id cloudadmin@ma2
ssh cloudadmin@ma2
ssh-copy-id cloudadmin@ma1
ssh cloudadmin@ma1
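
If typing all of that out feels tedious, the same key distribution can be done as a single loop from ma1 - just shorthand for the commands above; you'll still be prompted for the cloudadmin password for each appliance on the first copy:

for h in ua1 mona3 mona2 mona1 ea3 ea2 ea1 cc2 cc1 cmc ma3 ma2 ma1; do ssh-copy-id cloudadmin@$h; done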

Now grab plink.exe or the full PuTTY installer and put it in your Windows path. Launch/relaunch PowerCLI and test the script below - update the passwords first though!

http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
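
Before running the full script, it's worth a quick connectivity test from the PowerCLI window; something as simple as the line below (with your own password) proves plink and the key setup are working:

plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "hostname"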

Now for the Shutdown CS9 Script:

# Script to shutdown HP Cloudsystem 9.0 Lab - Created by Michael Russell 11-12-15
connect-viserver labvc.lab.local -username administrator -password YourVCPasswordHere
echo "shutting down ovsvapp vm on compute host"
get-vm -name ovsvapp-compute.lab.local | shutdown-vmguest -confirm:$false
echo "shutting down compute.lab.local host"
get-vmhost -name compute.lab.local | stop-vmhost -confirm:$false -force
echo "shutting down the update appliance"
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@ua1 sudo shutdown -h now"
Start-sleep -s 10
echo "shutting down the monitoring appliances"
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@mona3 sudo shutdown -h now"
Start-sleep -s 10
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@mona2 sudo shutdown -h now"
Start-sleep -s 10
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@mona1 sudo shutdown -h now"
Start-sleep -s 10
echo "shutting down the Enterprise Appliances"
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@ea3 sudo shutdown -h now"
Start-sleep -s 10
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@ea2 sudo shutdown -h now"
Start-sleep -s 10
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@ea1 sudo shutdown -h now"
Start-sleep -s 10
echo "shutting down the Cloud Controller Appliances"
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@cc2 sudo shutdown -h now"
Start-sleep -s 10
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@cc1 sudo shutdown -h now"
Start-sleep -s 10
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@cmc sudo shutdown -h now"
Start-sleep -s 10
echo "shutting down the Management Appliances"
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@ma3 sudo shutdown -h now"
Start-sleep -s 10
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@ma2 sudo shutdown -h now"
Start-sleep -s 10
plink -ssh -l cloudadmin -pw YourPasswordHere cs-mgmt1.lab.local "sudo ssh cloudadmin@ma1 sudo shutdown -h now"
pause

So in PowerCLI you execute it as "./Labshut.ps1", for instance, and wait for it to complete. Replace "YourPasswordHere" with your own Lab password as set in the First Time Setup wizard.



The power-up script is more complex, but essentially you issue a power-on command for each VM, set a suitable wait timer, execute various checks and you're done. You may have a harder time tracking issues, as the os-refresh-config step generates a LOT of screen activity. Put more pause statements in if you wish; I've left mine until the very end so it's all automated.


# Script to startup HP Cloudsystem 9.0 Lab - Created by Michael Russell 17-12-15
echo "starting up ovsvapp vm on compute host"
get-vm -name ovsvapp-compute.lab.local | start-vm -confirm:$false

# Management Appliance #1
echo "starting up ma1/cs-mgmt1 vm on management host"
get-vm -name cs-mgmt1 | start-vm -confirm:$false
# cs-mgmt1 power up timings to logon 3:06
Start-sleep -s 300
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo service mysql bootstrap-pxc"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo service mysql status"

# Management Appliance #2
echo "starting up ma2/cs-mgmt2 vm on management host"
get-vm -name cs-mgmt2 | start-vm -confirm:$false
# cs-mgmt2 power up timings to logon 1:50
Start-sleep -s 300
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ma2 sudo service mysql status"

# Management Appliance #3
echo "starting up ma3/cs-mgmt3 vm on management host"
get-vm -name cs-mgmt3 | start-vm -confirm:$false
# cs-mgmt3 power up timings to logon 2:43
Start-sleep -s 300
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ma3 sudo service mysql status"

# Management Appliance Refresh
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo os-refresh-config"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ma2 sudo os-refresh-config"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ma3 sudo os-refresh-config"

# Cloud Controller #1
echo "starting up cs-cloud1 vm on management host"
get-vm -name cs-cloud1 | start-vm -confirm:$false
# cs-cloud1 power up timings to logon 3:40
Start-sleep -s 300
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@cmc sudo service mysql bootstrap-pxc"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@cmc sudo service mysql status"

# Cloud Controller #2 & #3
echo "starting up cs-cloud2 & cs-cloud3 vms on management host"
get-vm -name cs-cloud2 | start-vm -confirm:$false
Start-sleep -s 10
get-vm -name cs-cloud3 | start-vm -confirm:$false
# cs-cloud2 power up timings to logon 2:43
# cs-cloud3 power up timings to logon 2:47
Start-sleep -s 300
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@cc1 sudo service mysql status"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@cc2 sudo service mysql status"

# Cloud Controller Appliance Refresh
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@cmc sudo os-refresh-config"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@cc1 sudo os-refresh-config"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@cc2 sudo os-refresh-config"

# Enterprise Appliance #1
echo "starting up ea1/cs-enterprise1 vm on management host"
get-vm -name cs-enterprise1 | start-vm -confirm:$false
# cs-enterprise1 power up timings to logon 12:40
Start-sleep -s 900
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea1 sudo service mysql bootstrap-pxc"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea1 sudo service mysql status"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea1 sudo -u csauser /usr/local/hp/csa/scripts/elasticsearch start"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea1 sudo -u csauser /usr/local/hp/csa/scripts/msvc start"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea1 sudo service csa restart"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea1 sudo service mpp restart"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea1 sudo service HPOOCentral restart"

# Enterprise Appliance #2 & #3
echo "starting up ea2/cs-enterprise2 & ea3/cs-enterprise3 vms on management host"
get-vm -name cs-enterprise2 | start-vm -confirm:$false
Start-sleep -s 10
get-vm -name cs-enterprise3 | start-vm -confirm:$false
# cs-enterprise2 power up timings to logon 11:31
# cs-enterprise3 power up timings to logon 0:46
Start-sleep -s 900
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea2 sudo service mysql status"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea2 sudo -u csauser /usr/local/hp/csa/scripts/elasticsearch start"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea2 sudo -u csauser /usr/local/hp/csa/scripts/msvc start"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea2 sudo service csa restart"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea2 sudo service mpp restart"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea2 sudo service HPOOCentral restart"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea3 sudo service mysql status"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea3 sudo -u csauser /usr/local/hp/csa/scripts/elasticsearch start"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea3 sudo -u csauser /usr/local/hp/csa/scripts/msvc start"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea3 sudo service csa restart"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea3 sudo service mpp restart"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ea3 sudo service HPOOCentral restart"

# Monitoring Appliance #1
echo "starting up mona1/cs-monitor1 vm on management host"
get-vm -name cs-monitor1 | start-vm -confirm:$false
# cs-monitor1 power up timings to logon 16:00
Start-sleep -s 1200
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@mona1 sudo service mysql bootstrap-pxc"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@mona1 sudo service mysql status"

# Monitoring Appliance #2 & #3
echo "starting up mona2/cs-monitor2 & mona3/cs-monitor3 vms on management host"
get-vm -name cs-monitor2 | start-vm -confirm:$false
Start-sleep -s 10
get-vm -name cs-monitor3 | start-vm -confirm:$false
# cs-monitor2 power up timings to logon 11:40
# cs-monitor3 power up timings to logon 9:21
Start-sleep -s 900
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@mona2 sudo service mysql bootstrap-pxc"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@mona2 sudo service mysql status"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@mona3 sudo service mysql bootstrap-pxc"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@mona3 sudo service mysql status"
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@mona1 sudo service mysql status"

echo "check output for errors and perform manual direct ssh fix if you see error: The server quit without updating PID file, or, MySQL (Percona XtraDB Cluster) is stopped. Check log."

# Update Appliance #1
echo "starting up ua1/cs-update1 vm on management host"
get-vm -name cs-update1 | start-vm -confirm:$false
# cs-update1 power up timings to logon 0:25
Start-sleep -s 60
plink -ssh -l cloudadmin -pw <cloudadmin password> cs-mgmt1.lab.local "sudo ssh cloudadmin@ua1 sudo os-refresh-config"

echo "All Cloud Appliances should now have started, please check consoles for errors"
pause


Replace the <cloudadmin password> with your Lab password as set during the first time setup. You can tweak values based on your own lab performance / findings. The script above takes me 1 hour 40 minutes but you can edit the sleep timers to reduce this.....

The Consoles URL Summary is as follows:

Foundation Console:
http://192.168.10.80
(admin/<cloudadmin password>)
Kibana Activity Dashboard:
http://192.168.10.80:81/#/dashboard/file/activity
Kibana Log Dashboard:
http://192.168.10.80:81/index.html#/dashboard/file/logstash.json
Monitoring Dashboard:
http://192.168.10.80:9090/auth/login/?next=/monitoring/

HA Proxy (Health Check for Management Appliances):
http://192.168.10.80:1993

Openstack Console:
https://192.168.12.200/project/
(admin/<cloudadmin password>)

Cloud Controller HA Proxy:
http://192.168.10.81:1993

Enterprise HA Proxy:
http://192.168.10.82:1993

CSA:
https://192.168.12.201:8444/csa/login
(Admin/cloud)


Consumer CSA Marketplace Portal:
https://192.168.12.201:8089/org/CSA_CONSUMER
(consumer/cloud)

Operations Orchestration:
http://192.168.10.82:9090/oo/login/login-form
(administrator/<cloudadmin password>)

Have Fun!

Thursday 12 November 2015

Installing HP Cloudsystem 9.0 - Part 8

Command Line Tools & Glance image Deployment:

My next step is to get the command line tools up and running and to upload a small image to Glance that I can use to test a few designs.

Oh! This is interesting, Glance is only available in the Linux tools package this time.....how are you meant to upload your images I wonder?!! Only HTTP locations are supported.

Well, I extracted the Windows tools to a folder and created a batch file to set my environment variables as follows:

Filename: env.bat

set OS_USERNAME=Admin
set OS_PASSWORD=<Password used during setup>
set OS_TENANT_NAME=demo
set OS_AUTH_URL=https://192.168.12.200:5000/v2.0
set OS_REGION_NAME=RegionOne

Then I can run commands like:
nova --insecure list
nova --insecure hypervisor-list

+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 3  | domain-c261(Cloud)  |
+----+---------------------+

And so on. Without Glance it's going to be interesting trying to get my Windows images uploaded, so I'm cheating and using the Glance client from the CloudSystem 8.1 Tools!! I could also deploy a Linux VM or a web server for the purpose, but I've only used Windows in the past so I'll see how this goes.

nova --insecure service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | cc2         | internal | enabled | up    | 2015-10-09T13:46:43.000000 | -               |
| 4  | nova-conductor   | cc2         | internal | enabled | up    | 2015-10-09T13:46:43.000000 | -               |
| 7  | nova-scheduler   | cc2         | internal | enabled | up    | 2015-10-09T13:46:43.000000 | -               |
| 10 | nova-cert        | cc1         | internal | enabled | up    | 2015-10-09T13:46:51.000000 | -               |
| 13 | nova-conductor   | cc1         | internal | enabled | up    | 2015-10-09T13:46:51.000000 | -               |
| 16 | nova-scheduler   | cc1         | internal | enabled | up    | 2015-10-09T13:46:51.000000 | -               |
| 19 | nova-conductor   | cmc         | internal | enabled | up    | 2015-10-09T13:46:50.000000 | -               |
| 22 | nova-cert        | cmc         | internal | enabled | up    | 2015-10-09T13:46:50.000000 | -               |
| 25 | nova-scheduler   | cmc         | internal | enabled | up    | 2015-10-09T13:46:50.000000 | -               |
| 28 | nova-consoleauth | cmc         | internal | enabled | up    | 2015-10-09T13:46:50.000000 | -               |
| 30 | nova-compute     | Labvc-Cloud | nova     | enabled | up    | 2015-10-09T13:44:57.000000 | -               |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

My nova-compute service went down at one point, so I rebooted the Compute Host and toggled the following commands until the service came up AND the state also showed up:

nova --insecure service-enable Labvc-Cloud nova-compute
nova --insecure service-disable Labvc-Cloud nova-compute

So, let's get an instance up and running! I initially had no luck getting uploaded images to work; every time they deployed and hit the VMware hypervisor they gave an error about no valid hosts. What I found when I broke down the advanced properties is that they have changed between CloudSystem 8.1 and 9.0. Undoubtedly this is because of the switch to Helion Openstack 1.1.1 and a later version of Openstack (Juno stable release 2, I think). Anyhow, the old Glance commands were not working, so I kept trying combinations until a CirrOS image worked fine, which indicated that the advanced properties I was using with Windows were no longer all valid.

So, the same procedure is used in vCenter to export an existing template to an OVF, which splits out the VMDK disk we upload. Select a VMware template and then export it as an OVF (I'm using the old C# client here):

Next wait until the export has finished and then if you examine the folder specified, a subfolder with the template name will have the files you need. 
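
As an aside, ovftool can usually do this kind of export from the command line too if you prefer; I haven't tested it against this particular template, but the general shape is below, where <Datacenter> is your own datacenter name and the paths are just my lab examples:

ovftool "vi://administrator@labvc.lab.local/<Datacenter>/vm/2012R2 Std Template" "C:\Temp\cloud\"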

This is the list of files:

We use Glance to upload the VMDK file and leave the VMX behind, so when we configure the image it's important to add the right advanced properties so that a deployed instance gives us a well-performing VM back. Make sure you use a unique image and disk name for the Glance upload - i.e. if you're re-uploading the same image more than once after a patch etc, CHANGE THE DISK NAME!! You'll just get "connection reset by peer" (10054) errors if you throw a duplicate disk file name at Glance, at least that's what I experienced! Here is the command I used after I changed the VMDK name:

glance --insecure image-create --name 2012R2Test --disk-format vmdk --container-format bare --file "C:\Temp\cloud\2012R2 Std Template\2012R2_rdisk1.vmdk" --is-public True --is-protected False  --progress --property vmware_ostype=windows8Server64Guest --property vmware_adaptertype=lsiLogicsas --property vmware_disktype=sparse --property hw_vif_model=e1000e

Now we have an image in Glance. If you want to check the Properties of an image do this:

glance --insecure image-show 2012R2test

And you get the same box as shown above. To update a parameter you use the image-update command:

glance --insecure image-update "Server 2008R2" --property hypervisor_type=vmware --property vmware_ostype=windows7Server64Guest --property vmware_adaptertype=lsiLogicsas --property hw_vif_model=e1000e

The only 4 values you need to set are shown above. The optional settings you may wish to tweak are:

hw_vif_model:
e1000, e1000e, VirtualE1000, VirtualE1000e, VirtualPCNet32, VirtualSriovEthernetCard, and VirtualVmxnet.

vmware_adaptertype:
lsiLogic, lsiLogicsas, busLogic, ide, or paraVirtual

os_type (2008 R2 , 2012 R2):
windows7server64guest, windows8server64guest

Note: There is no VMXNET3 option; this appears to be due to a bug in Openstack that was patched in May 2015, but the fix would not have made it into Helion Openstack 1.1.1. This should be addressed down the road.

Now let's look at the uploaded image in Foundation:

The next thing to do is to test an instance deployment. Assuming you have an activated vCenter & cluster, let's deploy an instance. I created a Windows flavor as follows:

The instance flow is as follows:

I left all the other options at default, I've captured them here just to show them:




The instance starts spawning. Basically it's staging the Glance disk BACK to a datastore in VMware, then copying it and creating a linked clone from that copy. The process usually takes 20 minutes for the first VM and seconds for subsequent ones on the same datastore. Keep a close eye on free disk space, as the images take up a lot of room while you test different variations. Also, if you find an instance request triggers NO activity in vCenter, reboot your compute host (don't forget to shut down the ovsvapp first and bring it up afterwards!), or look at the Nova issue I explained earlier.
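
If you'd rather kick off a test deployment from the command line than the console, a boot request looks roughly like this (the flavor name, net-id and instance name are placeholders; the image name matches the Glance upload above):

nova --insecure boot --flavor <windows-flavor> --image 2012R2Test --nic net-id=<tenant-network-id> TestInstance01
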
Now we have an instance booted and ready to go:


I'm not going to cover Cloudbase-Init here, but it's a means of passing customisation parameters to Windows VMs to go beyond "build me an OS...."
https://cloudbase.it/cloudbase-init/

Now you have an instance it's time to play...!!

Update: Note to self - the OO Appliance credentials are "administrator" and the password you use during first time setup.

Sunday 1 November 2015

VMware SRM - 3PAR Certificates

At a certain point you will upgrade the 3PAR firmware, and in one of those code releases 3PAR certificates were introduced. The issue is that SRM will stop working until you've rectified this. The SRA User Guide covers TPDSrm.exe, which is located somewhere under the folder C:\Program Files (x64)\VMware\VMware vCenter Site Recovery Manager\storage
The exact syntax you're going to need is as follows:

TPDSrm.exe viewcert

Take Note of the SysID for each 3PAR.

TPDSrm.exe removecert -sysid XXXXX

Do this for each 3PAR you have upgraded and want to replace the certificate on.

TPDSrm.exe validatecert -sys <Hostname/IP Address of 3PAR> -user XXXX -pass XXXX

Answer Yes to accept the certificate.

Once you've done this for all the upgraded 3PARs, you need to go into SRM and refresh each Array Manager, and also each SRA Adapter for good measure. Now test against a simple recovery plan with one small LUN and a single VM. Do a TEST failover and then a Recovery failover (Disaster Recovery method), followed by a Reprotect, followed by a failback (Planned Migration method), followed by another Reprotect. Once all those work for a particular array pair you can be fairly certain the SRA is communicating correctly with the 3PAR. Next do a test failover for a Production Protection Group to make sure.

A spreadsheet with the following might be of use before beginning this task, particularly if you have many 3PARs:

Datacenter X:
3PAR Hostname
3PAR IP Address
3PAR SysID
vCenter Server Name
SRM Server Name

Hope this helps you out. Once you've got the 3PAR upgrade in, the certificate doesn't expire for many years, so you won't have to revisit this unless you replace the 3PAR.



Thursday 22 October 2015

Installing HP Cloudsystem 9.0 - Part 7

This post deals with registering vCenter and activating a compute cluster so you can deploy instances and play with Openstack.

So, browse to the Operations Console Integrated Tools section

http://192.168.10.80/#/system/integrated_tools

Enter your vCenter details and Click Register

You will see this when completed

Clicking on the entry gives the following information and also allows you to update settings later should a password change etc

Now click on the Compute menu section and then Compute Nodes. You will see your clusters there, and I'm ready to activate my Compute Cluster, which has an Intel NUC in it built for this purpose. I did reset the networking, as the host was previously attached to my main Distributed vSwitch; I created a second DVS and it is now joined to that instead. See the Admin Guide from page 123 onwards for guidance.

Go to vCenter and Manage each Compute host, go to the Configuration and edit the Security Profile / Firewall / inbound configuration to enable "VM serial port connected over network".
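
If you prefer doing this from the ESXi command line, the equivalent should be something like the two commands below (I'm assuming the ruleset is named remoteSerialPort on your build - the list command will confirm it):

esxcli network firewall ruleset list | grep -i serial
esxcli network firewall ruleset set --ruleset-id remoteSerialPort --enabled true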

There is also an option of enabling console access from the Openstack console via VNC on ports 5900-6105, but it's a wide range and you probably won't be giving users access to the Openstack console anyway.

The screenshot above shows how my DVS configuration is looking currently. The Management Host uses DSwitch01 and the Compute Host DSwitch02. Now, I'm really excited about getting VM segmentation without having to use NSX so here is where we exploit the Open vSwitch vApp!

Go to the folder you extracted the CloudSystem 9.0 Tools to and you will see an OVA called "cs-ovsvapp.ova". Upload this to vCenter as follows:
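
If you'd rather skip the Deploy OVF Template wizard, ovftool should be able to do the import directly with something like the line below (the datastore and datacenter names are placeholders; "Cloud" is my compute cluster name):

ovftool --datastore=<datastore> --name=cs-ovsvapp cs-ovsvapp.ova "vi://administrator@labvc.lab.local/<Datacenter>/host/Cloud/"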

So, when we activate the compute cluster this OVSvApp is automatically installed on each ESXi Host.

Note: I ran into a problem after locating the OVSvApp on my Management cluster; it's just a cloning problem, but to resolve it I rebooted my vCenter to clear the stuck clone, deleted the template, and reimported the OVA directly to my Compute Cluster.

Let's see if this is true.

I've selected the Cloud cluster where my Intel NUC compute node is, and you get an option to Activate. Double-check your networking and the relevant section in the Admin Guide and off we go!

You can specify vmnic interfaces here and have CS9 create the DVS or, as I have, do it yourself and tell CS9 where to go. Click Queue for Activation and then you can click Complete Activations - for when you're doing many at a time, I guess.

Boy, they aren't taking any chances! Click Confirm Activations when ready. You will see the Cluster Activating


And you will also see it cloning your OVSvApp!

After the Clone / relocate issue I encountered I was unable to successfully re-activate the Compute Cluster. I got errors as follows:

Beginning activation checks (Cloud(labvc.lab.local)),
Verifying that no whitespace or special characters exists in the cluster name or its datacenter name while activating cluster Cloud,
Checking if controller(s) are reachable while activating the cluster Cloud,
Verifying cluster has at least one host,
Verifying cluster has at least one shared datastore,
Verifying that the cluster is enabled with DRS,
Checking for instances on the compute Cloud(labvc.lab.local),
Error: OVSvApp installation failed:  (Exception),
Initializing OVSvApp installer configuration,
Running OVSvApp installation scripts

I also got:
Error: OVSvApp installation failed: Couldn't find the OVSvApp Template/Appliance (Exception),

I found one problem - when I dropped my compute host back to a standard vSwitch and created a new DVS, I forgot to check that jumbo frames were enabled, so cloning the OVSvApp (or anything else) was getting stuck at 29%. Once that was fixed I rebooted vCenter and the compute host and tried again with the template attached and stored on the compute host:

Beginning activation checks (Cloud(labvc.lab.local)),
Verifying that no whitespace or special characters exists in the cluster name or its datacenter name while activating cluster Cloud,
Checking if controller(s) are reachable while activating the cluster Cloud,
Verifying cluster has at least one host,
Verifying cluster has at least one shared datastore,
Verifying that the cluster is enabled with DRS,
Checking for instances on the compute Cloud(labvc.lab.local),
Initializing OVSvApp installer configuration,
Running OVSvApp installation scripts,
Found an existing management DVS & DCM portgroup,OVSvApp has been created and configured successfully,Successfully created DVS,
Checking for VCN L2 agent status,
Deployed OVSvApp virtual machines. Status: [u' ovsvapp-compute.lab.local on the host compute.lab.local is up and running '],
Updating Host details for cluster Cloud,
Updating OVSvApp details in vCenter: labvc.lab.local,
Updating cluster details for nova.conf,
Waiting for the service to be detected by the Cloud controller,
Compute node: domain-c261(Cloud) successfully added in the list of hypervisors,
Ending Activation (Cloud(labvc.lab.local))

Phew!! I now have a slightly different network configuration: the OVSvApp has 4 NICs, 3 new ones on DSwitch02, but it also creates a new DVS with no physical uplinks?!

Now the NUC only has 2 pCPUs and the OVSvApp requires the following hardware:
I might try to edit the vCPU count and reduce it to 2.... Done, and it appears to be OK...

The Activated Compute Nodes looks like this now in the Operations Console:

Tenant networks are created in the trunk DVS, which then allows the OVSvApp to control who they can talk to and how. The 4 OVSvApp NICs use the VMXNET3 driver, which means no 1Gb E1000 bottlenecks! Lovely! Just check the load balancing policy on these new port groups to ensure you're getting the best configuration in your environment.

I also just noticed on pages 16-17 of the Troubleshooting Guide that they recommend removing the two distributed switches when this happens (failed first attempt at activation):

Manually remove the switches before retrying the compute activation action.
1. Using administrator credentials, log in to vCenter.
2. Select Inventory→Networking.
3. Right-click the Cloud-Data-Trunk-<cluster_name> distributed switch and select Remove.
4. Right-click the CS-OVS-Trunk-<cluster_name> distributed switch and select Remove.
5. Retry the activation action on the ESXi compute node.

That's all for now. I'll cover the command line tools in the next post, but after that I'm stuck: I've been having difficulty deploying Windows images from Glance to ESXi. I had similar experiences using Openstack over a year ago but found ways around it. Back to the drawing board, but work is busy so it may be a while, given how labour-intensive booting the lab up and shutting it down is - it's not a 5 minute quick check. If you find a solution post a comment!!!

Wednesday 14 October 2015

Installing HP Cloudsystem 9.0 - Part 6

So, you need to allow about an hour to start up your Lab according to the Admin Guide instructions starting on page 55.

Management Appliances:

Start by powering on the Compute and Management Hosts and then the OVSvApp VM on the Compute Host.

Next, continue by powering on the ma1 appliance; in my lab it's called cs-mgmt1. If you look at the console it will stay on a screen listing network adapters etc. for a few minutes. Wait until this clears to a logon screen, then SSH on and prep the VM as follows:

sudo -i
service mysql bootstrap-pxc
service mysql status

Power on ma2 / cs-mgmt2
Wait for the logon prompt on the console and check mysql on ma2 from ma1:
ssh cloudadmin@ma2 sudo service mysql status

Power on ma3 / cs-mgmt3
Wait for the logon prompt on the console and check mysql on ma3 from ma1:
ssh cloudadmin@ma3 sudo service mysql status

Perform the following on each of the VMs in turn, starting with ma1, then ma2, and finally ma3:
os-refresh-config
This spins through hundreds of pages of checks, takes 2-3 minutes and ends with this:

Wait until each is completed before doing the next appliance:
os-refresh-config
ssh cloudadmin@ma2 sudo os-refresh-config
ssh cloudadmin@ma3 sudo os-refresh-config

The health check involves logging into the Management Portal; I use the VIP link as follows:
http://192.168.10.80
Then go to General, Monitoring & Launch Monitoring Dashboard.
(I get the odd error "Unable to list alarms - connection aborted", etc.)
This didn't show any useful information at this point.
The HA Proxy shows everything up in green except monasca:
http://192.168.10.80:1993

The Cloud Controllers are next:

Power on cmc or in my case cs-cloud1
Wait a few minutes for it to get to a logon prompt
Go back to the ma1 / cs-mgmt1 appliance and execute the following commands:
ssh cloudadmin@cmc sudo service mysql bootstrap-pxc
ssh cloudadmin@cmc sudo service mysql status

Power on cc1 & cc2 and wait for a logon prompt
(Hit enter as some messages may make it appear to be paused but it's actually ready)
from ma1 run the following commands on the cc1 & cc2 appliances:
ssh cloudadmin@cc1 sudo service mysql status
ssh cloudadmin@cc2 sudo service mysql status

Now perform an os-refresh-config on each appliance:
ssh cloudadmin@cmc sudo os-refresh-config
ssh cloudadmin@cc1 sudo os-refresh-config
ssh cloudadmin@cc2 sudo os-refresh-config
The os-refresh-config completes much faster than it did for the management appliances

Log into the Openstack Console
https://192.168.12.200/admin/info/
User: admin, <password as specified during install>
Check the Admin\System\System Information on the left and inspect each of the sections: Services, Compute Services, Block Storage Services and Network Agents to ensure all are Enabled and operating
The Cloud Controller HA Proxy should also be checked:
http://192.168.10.81:1993

Now for the Enterprise Appliances:

Power on ea1 / cs-enterprise1
Wait for a logon prompt (and yes, it DOES reference a cattleprod!!!)

ssh cloudadmin@ea1
sudo -i
service mysql bootstrap-pxc
service mysql status
sudo -u csauser /usr/local/hp/csa/scripts/elasticsearch start
sudo -u csauser /usr/local/hp/csa/scripts/msvc start
service csa restart
service mpp restart
service HPOOCentral restart
exit
exit

Power on ea2 / cs-enterprise2 & wait for the cattleprod / logon prompt
Power on ea3 / cs-enterprise3 & wait for the cattleprod / logon prompt

ssh cloudadmin@ea2
sudo -i
service mysql status
sudo -u csauser /usr/local/hp/csa/scripts/elasticsearch start
sudo -u csauser /usr/local/hp/csa/scripts/msvc start
service csa restart
service mpp restart
service HPOOCentral restart
exit
exit

ssh cloudadmin@ea3
sudo -i
service mysql status
sudo -u csauser /usr/local/hp/csa/scripts/elasticsearch start
sudo -u csauser /usr/local/hp/csa/scripts/msvc start
service csa restart
service mpp restart
service HPOOCentral restart
exit
exit

At this stage there is no useful information in the Openstack user dashboard or the Operations Console; the HA Proxy appears to be the best check at this point:
http://192.168.10.82:1993/

In one scenario my ea2 showed CSA errors above so I ran through the steps above again on that appliance which resolved them and all turned green!

The Co-Stop only blipped to 2/4/6 Milliseconds a few times over this process so I'm happy this is less stressful on the environment!

You can check the Enterprise interfaces such as Marketplace, CSA & OO if you wish here.

Next are the monitoring appliances:

Power on mona1 / cs-monitor1 and wait for the cattleprod / logon screen

Now, I had trouble shutting down these appliances on one occasion, so on power-up I saw a LOT of messages and thought it was shafted. I've since noticed that even after a clean power-down it's not happy! You typically see mona1 cycling through "monasca-notification main process ended" messages a LOT. You may spot a logon prompt on the others hidden among the messages. There is a fix below, but run through the mysql steps first anyway, as SSH does work despite the warnings.

On the ma1 appliance connect to mona1:

ssh cloudadmin@mona1
sudo -i
service mysql bootstrap-pxc
service mysql status
exit
exit

Power on mona2 / cs-monitor2 & wait for a logon prompt
Power on mona3 / cs-monitor3 & wait for a logon prompt
ssh cloudadmin@mona2 sudo service mysql status
ssh cloudadmin@mona3 sudo service mysql status


Start the Update Appliance:

Power on ua1 / cs-update1
from the ma1 appliance ssh to it
ssh cloudadmin@ua1 sudo os-refresh-config

That's it - you should be getting health information on the Operations Console & Openstack Console.

Monitor Appliances Fix:

My 3 monitoring appliances are constantly cycling through a pair of errors:

monasca-api main process ended, respawning
monasca-monitoring main process ended, respawning

I've tried power cycling, and bringing up either the first or third one on its own, to no avail. More knowledgeable colleagues pointed me to the Admin Guide, pages 61 & 62 - "Unexpected Shutdown recovery options".

So, even though the console is going Nuts (!) you can still SSH onto these appliances and see what's wrong.

ssh cloudadmin@mona1
sudo -i
cat /mnt/state/var/lib/mysql/grastate.dat
Check the value of seqno; mine on all 3 appliances was -1, but I ran the following command on mona1 anyway:
service mysql bootstrap-pxc
service mysql status
Then I exited twice and SSH'd into each of the other 2 appliances and ran:
service mysql restart
service mysql status
I then restarted the 3 appliances, one at a time:
ssh cloudadmin@mona1 sudo shutdown -r now
ssh cloudadmin@mona2 sudo shutdown -r now
ssh cloudadmin@mona3 sudo shutdown -r now

So far, no difference; it was the next set of commands that fixed this for me.
ssh cloudadmin@mona1
sudo -s
export PYTHONPATH=/opt/vertica/oss/python/lib/python2.7/site-packages
su dbadmin -c 'python /opt/vertica/bin/admintools -t view_cluster -d mon';
This command firstly showed the following while mona3 was restarting:

 DB  | Host         | State
-----+--------------+--------------
 mon | 192.168.0.33 | INITIALIZING
 mon | 192.168.0.34 | INITIALIZING
 mon | 192.168.0.35 | DOWN

Then it changed to this a few moments later:

 DB  | Host | State
-----+------+-------
 mon | ALL  | DOWN

As all 3 nodes were down, we need to restart Vertica from the last known good state. We copy out the vertica_admin_password="XXXXX" value from /home/cloudadmin/hosts:
vi /home/cloudadmin/hosts
You can just copy everything out of the SSH session if you're using PuTTY, paste it into Notepad and extract the exact password. Then run the following command:
su dbadmin -c 'python /opt/vertica/bin/admintools -t restart_db -d mon -e last -p <vertica_admin_password>';
This should do the trick or there is a further command to force the issue:
su dbadmin -c 'python /opt/vertica/bin/admintools -t restart_node -s 192.168.0.33,192.168.0.34,192.168.0.35 -d mon -p <vertica_admin_password> -F'

Hopefully you can repeat the view_cluster command used earlier to confirm the status as follows:
 DB  | Host | State
-----+------+-------
 mon | ALL  | UP

All good!

Monday 5 October 2015

Installing HP Cloudsystem 9.0 - Part 5

The interfaces for CS9, and there are plenty of them, are all highly available except for the Update Appliance, so your placement and anti-affinity rules in a 3-node management cluster are very important.

This is a Lab, so I'm running all mine on a single host while the electricity provider's van drives around the estate trying to figure out who's causing the brownout - I'm running a Cloud Lab, buddy! I'm currently drawing 141.76 watts, in case you're interested!

So, to the interfaces:

You get directed to http://192.168.10.50, which is fine except that it's not the VIP for the Management Appliance; I can access the same interface using http://192.168.10.80

Username: Admin (capital "A") and the password set earlier - I don't remember your password, that's your job!
Well, that's different! At least everything is green! The Activity, Logging and Monitoring Dashboards require additional authentication. If prompted, use "admin"/<password set during install> for these dashboards.

There's a Backup & Restore Tab! 

Integrated Tools is where we'll add our vCenter in and activate the Compute Cluster Node. 

So, this is the main console that equates to the 8.1 HP Horizon Foundation Console. With OO moved to the Enterprise Appliance alongside CSA, and Openstack residing on its own appliances, these appliances are devoid of Openstack/OO components from what I can tell. The Monasca/Kibana URLs point to the Management Appliance, so it's involved in exposing the monitoring elements.

Let's move on to the Openstack Portals:
This is the Classic Openstack Horizon Portal from Juno: 
https://192.168.12.200
(user: admin, password as per install)

And the interface reflects this update:

Now there is still the HP equivalent but it leads to a monitoring page:
http://192.168.10.80:9090
The logon page looks the same as the previous one but we see this after logging in:

Then if you click on the Dashboard button under Monitoring you'll be brought to Monasca which monitors the environment:

The interfaces so far are for Administrator use. We certainly don't want to show them to a customer, that's what CSA is for! A nice friendly Marketplace with the HP OO engine powering the requests sent to Openstack, the Cloud, VMware direct to anywhere you want really! I've heard it can even order Pizza! 

The interface in my Lab is accessed via the following URL:
https://192.168.12.201:8444/csa

Username: Admin
Password: cloud

This will need to be changed later of course....This is CSA 4.5 so there are improvements and changes - you can now search the properties of your subscriptions, I'll test this later. 


The default Consumer Portal is available with nothing published to it:
https://192.168.12.201:8089/org/CSA_CONSUMER

The following credentials are used:
Username: consumer
Password: cloud


Shop away! 

Now the Operations Orchestration Console is accessed here:
http://192.168.10.82:9090/oo


The following credentials are used:
Username: administrator
Password: <password specified during setup>


This is where updated content can be imported from the HP Live Network:
and for CSA:

So that's the tour - I'll activate my Compute Node next and look at deploying a VM to see how that operates. 

Powering down your Lab

Lastly, I'll see how to safely power down the Lab, as the last thing I want to do is have to rebuild it from scratch each time! See page 54 of the HP Helion CloudSystem 9.0 Administrator Guide.

SSH into ma1, the first Management Appliance created; in my case it's called cs-mgmt1 in vCenter and is on IP 192.168.10.50.
From there we SSH into each appliance in order and shut them down one at a time:

Shutdown the Update Appliance (192.168.10.59):
ssh cloudadmin@ua1 sudo shutdown -h now
(Say yes to trust the ssl fingerprint and enter the password used during setup)

Shutdown the Compute Section of the Cloud:
Shutdown the VMs on the Compute ESXi Host(s) via vCenter
(This includes the OVSvAPP)
Shutdown the Compute ESXi Hosts themselves via vCenter

Shutdown the Monitoring Appliances (192.168.10.33/34/35):
ssh cloudadmin@mona3 sudo shutdown -h now
ssh cloudadmin@mona2 sudo shutdown -h now
ssh cloudadmin@mona1 sudo shutdown -h now
Note: The first one of these worked but the other 2 stalled the first time I did this. Be patient, they take a little longer to shut down, but all 3 worked for me the next time around.

Shutdown the Enterprise Appliances (192.168.10.56/55/54):
ssh cloudadmin@ea3 sudo shutdown -h now
ssh cloudadmin@ea2 sudo shutdown -h now
ssh cloudadmin@ea1 sudo shutdown -h now

Shutdown the Cloud Controller appliances (192.168.10.53/52/51):
ssh cloudadmin@cc2 sudo shutdown -h now
ssh cloudadmin@cc1 sudo shutdown -h now
ssh cloudadmin@cmc sudo shutdown -h now

Shutdown the Management appliances (192.168.10.58/57/50):
ssh cloudadmin@ma3 sudo shutdown -h now
ssh cloudadmin@ma2 sudo shutdown -h now
ssh cloudadmin@ma1 sudo shutdown -h now
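
Once the compute hosts are down, the remaining appliance shutdowns can also be collapsed into a single loop run from ma1 if you prefer - just shorthand for the individual commands above, in the same order:

for a in mona3 mona2 mona1 ea3 ea2 ea1 cc2 cc1 cmc ma3 ma2 ma1; do ssh cloudadmin@$a sudo shutdown -h now; done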

Startup is covered in the same guide on pages 55-57 - yes, it's That Long!! There's a lot of checking MySQL status, I mean a LOT! Otherwise it doesn't look too complicated.

Installing HP Cloudsystem 9.0 - Part 4

The next step is to browse to the new Management Appliance via the IP given, in my case http://192.168.10.50, and you get a First-Time Installer screen. It's a few pages long. I've filled it out as follows:

Edit the Data Center Management Network at this point:
Accept and continue on:

Now comes the biggie - the password! This little change cost me a week. I know it's in the release notes, but in 2015 you would think someone would accept that special characters are good for passwords by now. Here are the password restrictions from Cloudsystem 8.1:

The password should have a capital letter and a number. You should have a minimum
of 8 characters, maximum of 40 characters. It must not contain any of the following
characters: less than (<), greater than (>), semicolon (;), comma (,), double-quotation
mark ("), apostrophe ('), ampersand (&), backslash (\), slash (/), vertical bar (|), plus
sign (+), colon (:), equal sign (=), and space.

Now from Cloudsystem 9.0:

Specify a password for admin that is eight characters or less and is a combination of uppercase
and lowercase letters and numerals. Symbols and special characters found on the keyboard
are not supported.

I guess someone forgot to tell the developer that keyboards come with special characters for a very good reason: grammar and passwords! So MAKE SURE you choose one that is compliant, or you'll get to see 9 wonderful appliances deployed before it bombs out. If you tail the cs-avm-manager.log you'll see your invalid password appear in plain text alongside an error, just for good measure! It will fail - I've seen it, thrice! I used the same password in CS 8.1 with no problem, and on most of the other appliances in my Lab, so this was unexpected and frustrating to say the least.

If you try to use a small size for Glance you'll get an error:
At this stage I took the precaution of migrating all the cloud template files to my largest datastore, as the installer doesn't ask you about destinations. The new VMs will appear in the same datastore, so during the build this means you need a good bit of space (300GB+)!

The settings are summarized as follows:

Click Begin Installation and you can tail the cs-avm-manager.log if you're bored:
tail -f /var/log/cs-avm-manager/cs-avm-manager.log

The following Networks are created so it will end up something like this:

So far, so easy?! It took quite a while to get to the stage where all the appliances fully deployed. You should see the following if you were tailing the logfile above: 


The Initial Setup screen should now show all the deployment progress steps as completed in green:

If you get this far you're nearly there. If not here are a few places to look for answers:

Installation has failed. Please review the following log files for errors. Typically any fatal errors are near the end of the log files.

/var/log/cs-avm-manager/cs-avm-manager.log - this shows the overall deployment status.

/var/log/leia/leia-monitor.log - this shows the interaction between the UI and the components that do the deployment.

/var/log/upstart/os-collect-config.log - this shows the preliminary part of the deployment process.

/var/log/pavmms/pavmms_api.log - this shows the lower-level workings of the main part of the deployment process.

Final Steps (Page 25-26, HP Helion CloudSystem 9.0 Installation Guide):

Add a Physical NIC to the new Distributed vSwitches for Object and Block Storage

Wait for the Enterprise Appliances (Yes, all 3!) to finish updating by checking the following log:
cat /var/log/upstart/os-collect-config.log
for the line "CSL Installer Finished Importing additional Capsules"
(This can take another 30 minutes to finish)

SSH to ma1, in my case this is cs-mgmt1, you can check in the vCenter view (DNS Name)
SSH from within this appliance to ea1, which is cs-enterprise1 in my lab
ssh cloudadmin@192.168.10.54  (Yes to accept ECDSA key fingerprint)
sudo -i
vi /usr/local/hp/csa/jboss-as/standalone/deployments/csa.war/WEB-INF/classes/csa.properties
Replace (Nearly at the very bottom of the file): 
OOS_URL=https://192.168.12.201:9090
With:
OOS_URL=http://192.168.0.3:9090   (Make sure you change from https to http!)
service csa restart
exit
exit

Repeat for the other two Enterprise Appliances on 192.168.10.55 & 192.168.10.56
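
If editing the same file three times in vi feels tedious, the change can also be scripted from ma1 with something like the line below per Enterprise Appliance (shown for 192.168.10.54; this is just a sketch of the same edit, so double-check the file afterwards):

ssh cloudadmin@192.168.10.54 "sudo sed -i 's|OOS_URL=https://192.168.12.201:9090|OOS_URL=http://192.168.0.3:9090|' /usr/local/hp/csa/jboss-as/standalone/deployments/csa.war/WEB-INF/classes/csa.properties && sudo service csa restart"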

Check the HA Proxy Status page here:
http://192.168.10.82:1993/


We'll go through the various interfaces in the next Part of this series

I've captured the settings I entered on this section of the install process below in text format to help me if I should need to repeat this at a later date:

First-Time Installer Settings used:

Network Settings: Management Trunk

Datacenter Management Network
Primary DNS: 192.168.10.10
IP Ranges: 192.168.10.51-69

Consumer Access Network
vLAN ID: 51
CIDR: 192.168.12.0/24
Default Gateway 192.168.12.254
Appliance IP Ranges: 192.168.12.10-20

Cloud Management Network
vLAN ID: 50

External Network
vLAN ID: 52

Network Settings: Storage Trunk
Block Storage Network
vLAN ID: 60
Object Storage Network
vLAN ID: 61
CIDR: 192.168.19.0/24

Network Settings: Appliance Settings

Management Appliance
DCM FQDN: csmgmt.lab.local
DCM VIP: 192.168.10.80

Cloud Controller
DCM FQDN: cmc.lab.local
DCM VIP: 192.168.10.81
CAN FQDN: cmc.dept.lab.local
CAN VIP: 192.168.12.200

Enterprise Appliance
DCM FQDN: eap.lab.local
DCM VIP: 192.168.10.82
CAN FQDN: eap.dept.lab.local
CAN VIP: 192.168.12.201

Time Settings: 192.168.10.200

Password: Watch out, no special characters, in fact why use a keyboard at all?!!