Saturday, 30 January 2016

Excel use in IT in 12 minutes!!


I thought I'd base this post on Excel, as from time to time you end up using it in IT. As consultants we generally pull it out when we're doing inventories for Datacenter Moves or Discoveries, among other things, at least on the delivery end.

I've been working in Excel this week and started thinking about how the file was laid out, and as I do, began making some improvements and changes to make life easier for myself. I thought I'd lay out my process here as it might help someone out, or give them ideas on how to develop their own Excel knowledge when they have to dive into it.

Now, I'm no Excel expert, but as you use it and try to figure things out you pick up a few things. It also helps when you get similar collateral from colleagues' earlier projects. So the longer you're in IT, the more of this you have to draw on - if you can A) remember that it exists, and B) know where to find it!

I was asked to take part in a discovery exercise for a customer. I had two Excel files from other projects, neither of which I had worked on, but both had mature and valuable designs. So, once sanitized, either could be of value - but which to use?!

The first was more basic: formulae, pivot tables, lots of data and so on. The second used macros to generate combinations of data and clean itself out when being used elsewhere. Now, the macro one looked much better, and the report worksheets were enticing, BUT... I don't know macros. I could have got the guy who generated the original on a call, but what if I ran into difficulty later on? Choice one: keep it simple. You can spend hours just trying to get something to work, or you can just have it working. My advice - stick to a template you understand, because if you don't, you're on your own. This lets you focus on the data and the analysis instead of fighting Excel all the time when you're in over your head!

So, I chose option A and am glad I did. I could still peek into file B for inspiration. Macros are just a way to get things done, but formulae can get you there too!

I was given multiple data sources, and I needed to collate all the various worksheets into one master sheet that brought together the key pieces of information.

Source #1: Physical inventory from last year. This is out-of-date static data, but a good start.
Source #2: Discovery Tool output. Valid data within the last 24 hours - an excellent source, but as I'll explain, not as easy to use.
Source #3: Application Catalog. They want us to link apps with servers. Easy? Not so much!
Source #4: A previous audit done a few years ago for the datacenter. If not much has changed, it could be useful. We were lucky even to know this existed, thanks to a cracking good program manager on the customer side who had been around a long while.

Never underestimate the people issue: if you get a temporary person on the customer side for this kind of task, you will probably never even find out that Source #4, or something like it, exists, making your job ten times harder. We also got a lead on paperwork that might show which servers were retired - again, gold for mining to bring other data up to date.

At some point you have to cut off the number of data sources; IP ping sweeps, DNS lookups, etc. - the list goes on. Try to get at least two good data sources to correlate, as one will never be enough. Two gives you better odds of finding the gaps you don't know about, or of prompting questions about kit or subnets that may have been missed - especially if it's behind a firewall!

I ended up using the discovery data as the main source, and the physical inventory to validate it. The discovery is not complete and only shows a fraction of the servers I expected, but it's a work in progress, and using a graph to track the difference will be valuable when I speak to management about progress. It will take a while for the discovery tool to find everything, so spend that time tuning the output rather than doing it in Excel. We're seeing 500 application lines per server, which adds up to hundreds of thousands of Excel rows and makes filtering there far too slow to be useful. Dirty in, dirty out; clean in, clean out. Get the discovery people on your side and do the filtering of data on their side! Do you really need a list of Windows patches or UNIX device files for every server?!

Remember, every time they find new servers you may need to reload that worksheet in your Excel file - or ask them to supply changes ONLY. Ensure any other worksheets that reference the new data are looking at the full list: if you start with 50 discovered objects, make sure any lookups reference 1,000 rows in case they end up finding that many. That way you don't need to update your formulae again, and you won't accidentally start excluding new data if it grows beyond your earlier guesses.
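As a rough illustration (the sheet and column names here are invented, not from the real file), you can go a step further and reference whole columns so the lookup never runs out of rows, with IFERROR catching servers the discovery tool hasn't reported yet:

=VLOOKUP($A2,'Source #2'!$A:$H,3,FALSE)
=IFERROR(VLOOKUP($A2,'Source #2'!$A:$H,3,FALSE),"Not discovered yet")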

Now for the Spreadsheet - here are some headings I ended up with:

Index - list of each worksheet and hyperlinks to them with explanation as to what each is for

Goldmine - this is the worksheet where you gather all the data to give you a total view. Call it what you will!

Source #1 - here I copy the contents of source #1 after trimming out any fat. Now I have a local copy of the data to look up, separate from the original source file. This way I can keep all the data contained in my file without having to link to it and risk breaking anything.

Source #2 - see above
Source #3 - etc
Source #4 - etc

Reports - using the Calculations worksheet, I generated the graphs most interesting to me based on the data I had so far, plus later ones I knew I'd need and use, even though those data points were empty at the time. I still had to work out the formulae, but once that's done it's easy to edit later.

Calculations - you can do calculations at the end of any worksheet, but it gets messy after a while as the column widths are driven by the data and don't suit the analysis. You can simply cut and paste the calculations to a central worksheet instead. This keeps the data sources clean and groups all the calculations together, so you can even link one to another easily. I didn't start this way, but after all my calcs ended up on my physical inventory data source I decided to migrate them elsewhere to keep that source from getting any messier.

List - Used to create pre-populated drop down lists for any of the other worksheets. I'll hide this later.

Save your file often, but also create a new file by copying the old one and incrementing the version number. This protects you against data corruption later - it's like an old-style backup. Do this every day and keep only the last one or two versions. Also push the latest copy up to SharePoint or some other backed-up repository as a safeguard, and to share with colleagues.

Most of the formulae I use are simple lookups and COUNTIF statements: count the number of servers running Windows, UNIX, etc. I won't delve into those here, but you can use Google, which is where I learnt most of the tricks I use today. Take the time to learn this yourself and understand how the formulae work, as there's no substitute for experience. Don't always take the first Google result either: later Excel versions have a better range of formulae, so you may find more than one way to get what you need that is simpler and faster. This may break, however, if the customer wants an export to a much older version. Know your audience!
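For example (again, the sheet and column names are invented for illustration), a count of servers by operating system on a Goldmine-style worksheet might look something like this:

=COUNTIF(Goldmine!$D:$D,"Windows*")
=COUNTIF(Goldmine!$D:$D,"*Linux*")
=COUNTIFS(Goldmine!$D:$D,"Windows*",Goldmine!$E:$E,"Production")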

That's all for now. There's lots more you can do, of course, but the main thing is to be organized. The worst job I ever gave myself was updating a data source line by line and square by square because I hadn't thought through how I would incorporate future changes! And as always, enjoy!!



Monday, 18 January 2016

REST APIs - Part II


Now that we've connected to HP OneView, let's step back and try the same with Veeam Backup & Replication 9.0, just to see what the difference is?!

You can find the document "Veeam Backup RESTful API" here:
https://www.veeam.com/backup-replication-resources.html
See page 66 onwards for the beginner's example.

With Veeam you can use a standard browser to do what POSTMAN does, or you can just use POSTMAN instead. Normally a browser only lets you issue GET commands, but Veeam are clever people!

First, try the following URL to check that the API is running - it's actually a Windows service you can view, called "Veeam RESTful API Service": http://<Enterprise Server IP Address>:9399/web


You can see links at the top right for Tasks and Sessions, but these will only give you a 401 error as you're not authenticated. Now, you can use the web browser to generate an authenticated session, but you'll have to encode the username and password in Base64. My preference is to switch to POSTMAN, which makes this easier as it does the encoding for you!!

Here is the same GET request in POSTMAN:

Next click on the first sessionMngr link, in my case:
http://192.168.10.15:9399/api/sessionMngr/?v=latest
This opens a new tab in POSTMAN, we'll make some changes to request our session ID and authenticate.
So, change the Authorization drop-down to "Basic Auth", enter your credentials, and change this to a POST command. Click Send and you should get the following:

The SessionID near the end is the key line we want. The URLs provided will all be useful for the next steps, to action particular things we want Veeam to show us or do.
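If you'd rather script this step than click through POSTMAN, here's a minimal Python sketch of the same call, assuming (as the Veeam REST documentation describes) that the session ID comes back in the X-RestSvcSessionId response header. The IP address is my lab's and the credentials are placeholders:

import requests

VEEAM = "http://192.168.10.15:9399"

# POST to the session manager with Basic Auth, exactly as done in POSTMAN above
resp = requests.post(VEEAM + "/api/sessionMngr/?v=latest",
                     auth=("DOMAIN\\username", "password"))  # placeholder lab credentials
resp.raise_for_status()

# Veeam hands back the session ID in a response header; reuse it on every later call
headers = {"X-RestSvcSessionId": resp.headers["X-RestSvcSessionId"]}
print("Session ID:", headers["X-RestSvcSessionId"])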

To start with, we'll follow the beginner guide and get a list of Veeam backup servers. You have a list of links; click on the one ending in /api/backupServers, and add the session ID into the header as follows:

My header is called "X-RestSvcSessionId" and I've pasted the SessionID into the Value field. I've done a GET and received information on the single backup server in my lab, plus some server-specific URLs I can interact with.

My goal is to get the link to the backup template job and fire it up via the API. I know Veeam can schedule things, but I figured this was an easy first step to try.

Click on the jobs link of type "JobReferenceList" and it opens a new tab again. Drop in the session ID once more and run the GET. Remember there's a 15-minute timeout, so you may need to generate a new SessionID!!!
You can browse back to a previous tab to copy elements. Also note the disk (save) button, which lets you group and keep useful tabs/commands to replay later.

Now I can see a link for my "Backup Templates" job. I just have to call it and tell the API I want to run it, then monitor it for the result. After that I can close the session.

Page 155 has the POST command syntax for starting a job. The bit at the end is the key part.

Request:
POST http://localhost:9399/api/jobs/78c3919c-54d7-43fe-b047-485d3566f11f?action=start

Request Header:
X-RestSvcSessionId NDRjZmJkYmUtNWE5NS00MTU2LTg4NjctOTFmMDY5YjdjMmNj

I just need to click on the supplied URL, add the header to my new tab, and see if it works! Well, not quite that easy - remember to add "?action=start" to the end of the URL to tell it what you actually want it to do!!!
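Carrying on the Python sketch from earlier (the job ID below is the one from the documentation example - yours will come from your own jobs listing):

# Ask Veeam to start the job - note the ?action=start on the end of the URL
job_id = "78c3919c-54d7-43fe-b047-485d3566f11f"  # replace with the ID from your jobs listing
start = requests.post(VEEAM + "/api/jobs/" + job_id + "?action=start", headers=headers)
start.raise_for_status()
print(start.status_code, start.text)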







The Job is now running in Veeam:




Next, how do we get a job status to see if it's finished, failed or succeeded? You can check that it has started with this:

Next we can check existing backup sessions:
http://192.168.10.15:9399/api/backupSessions

This gives you some details of jobs with timestamps against them. You can probably zone in on a particular session or use the reporting options, but it's not as intuitive as the Veeam console, so I'll leave those possibilities up to you!
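In the Python sketch this is just one more GET with the same session header (what you then parse out of the response will depend on your Veeam version, so treat it as a starting point only):

# List recent backup sessions - the response contains job names, states and timestamps
sessions = requests.get(VEEAM + "/api/backupSessions", headers=headers)
sessions.raise_for_status()
print(sessions.text)  # inspect this to find the session for the job we just started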

The POSTMAN client has "Collections", and the option to save the tabs you worked on under a Collection. This makes it easier to replay them, although the SessionID still needs to be changed; you can also use the Authorization option to authenticate one-off commands. I'm sure devs can script injecting a valid SessionID into their scripts!

Finally, to delete the session there is a DELETE command:
http://localhost:9399/api/logonSessions/695f7cda-e4a6-4d9c-9603-8a6b05693c57
In my case:

That's it!

So why use this? Once you abstract the commands you use to interact with a product, and you have other products which work this way, such as OneView, you can create higher-level relationships to automate things better.

How about checking that a large replication task has completed before firing off a backup? Or pausing backups if a datacenter or host is running hot, via the temperature alerts in OneView?

What tasks are performed manually often enough to be worth scripting, and can you adapt the script so that when a new vCenter, ESXi host or VM is created, it handles the change with dynamic queries?

That's just the tip of the iceberg! How about deploying a new ESXi host automatically when OneView detects high resource consumption? Or when DevOps-deployed apps on CoreOS/Docker run slow, triggering a build of more CoreOS VMs and setting up Veeam replication for them automatically?

On REST in other applications: it's been a while since I looked at vCenter Orchestrator, now called vRealize Orchestrator. It's no longer baked into the vCenter deployment, so you have to download, install and configure a vCO appliance. Then you create vCO workflows which you can call via REST. In other words, it's second-hand integration: you need a person who understands vCO to set up all the workflows before you can call them with variables via your REST client. Not so hot, but it can be done.

Good articles that cover REST and vCO here:
http://www.vcoteam.info/

Enjoy!





Thursday, 14 January 2016

REST APIs - Part I


In this post I wanted to introduce APIs a bit. I've used PowerShell and PowerCLI with VMware for many years, but with DevOps, Ansible, GitHub, OpenStack, Chef, Puppet and so on, REST is now becoming a skill I need to understand, or at least know about.

I loaded up HP OneView 2.0 and Veeam Backup & Replication 9.0 to get a feel for using an API to drive things. Both were a little different, but interesting to explore in this way. Of course you can still use PowerShell, but this exploration was intended to expand my knowledge beyond the traditional tools I was familiar with.

Firstly you need a REST Client. I used a Chrome Application/Extension. You'll see the Apps icon on the Taskbar on the left as shown below:

Next, Click on the Web Store

Now do a search for REST. There are a few options - I've heard a lot about POSTMAN, which is what I'll be using here.

I've already installed POSTMAN; you just need to click "+ Add to Chrome", then click the Chrome Apps taskbar button again and this time you'll see the new app.

Next Click on the new Application and a new Browser Window opens up for you to play with!

Give yourself time to get used to the interface. What we'll do next is some basic requests against OneView, getting used to using this tool to query and operate the API.

Once you have configured the OneView administrator password, set the IP address and so on, we're all ready to go. You should be able to browse to the admin interface with a browser. Now we'll access it with POSTMAN and see what that looks like.

Do a "GET" and put in the URL to the OneView appliance, in my case "https://192.168.10.51" and Click Send. You should get a 200 OK Status Response. The fun starts here!!!

Now we need to get authentication worked out by generating a sessionID. We do this by adding some headers - there are two needed. By the way, I'm following the guide here:
http://h17007.www1.hp.com/docs/enterprise/servers/oneview1.2/cic-rest/en/content/s_start-working-with-restapis-sdk-fusion.html
Content-type: application/json
Accept: application/json
The headers should match mine shown below

Now make sure you change the type to POST and edit the URL to add "/rest/login-sessions". Then click the Body section, choose "raw", and enter the following into the text box below:
{"userName":"administrator","password":"mypassword"}

Change the password for your environment and click SEND. You should get a sessionID in the response box lower down, with a Status 200 OK. The sessionID can now be copied and added as a third header row, so you can send authenticated commands to OneView, as shown here:

Then run a GET with the request set to https://192.168.10.51/rest/version and see if you get this result:

You are now set to have some fun!! 
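For anyone who'd rather script these steps than click through POSTMAN, here's a minimal Python sketch of the same flow. It assumes the login response returns the token as "sessionID" and that the token goes into an "Auth" header on later requests, which is how the HP guide linked above describes it; the IP address and credentials are my lab placeholders:

import requests

ONEVIEW = "https://192.168.10.51"
HEADERS = {"Content-Type": "application/json", "Accept": "application/json"}

# POST the credentials to /rest/login-sessions to get a sessionID
login = requests.post(ONEVIEW + "/rest/login-sessions",
                      json={"userName": "administrator", "password": "mypassword"},
                      headers=HEADERS, verify=False)  # lab appliance uses a self-signed cert
login.raise_for_status()
session_id = login.json()["sessionID"]

# Add the sessionID as an extra header and repeat the version check
auth_headers = dict(HEADERS, Auth=session_id)
version = requests.get(ONEVIEW + "/rest/version", headers=auth_headers, verify=False)
print(version.status_code, version.json())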

HP Lists the API commands on the following page:
http://h17007.www1.hp.com/docs/enterprise/servers/oneviewhelp/oneviewRESTAPI/content/images/api/

Each command has an example, and they vary from basic to complex. Let's try a few.

List the Users on OneView

Click the Security/Users link on the API web page above. It shows a GET option with the URL /rest/users. You can click on the triangle to the left to expand this command for an example. The top grey box appears to be the command used, and the expected result is shown in the second box.

So, by just changing one word from the last test, "version" to "users", and adding an extra header, "X-Api-Version: 100", I can SEND this and get the result below:
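In the Python sketch that's one more GET with the extra header added (again, just a sketch against my lab appliance):

# Same call again, but against /rest/users and with the X-Api-Version header added
users = requests.get(ONEVIEW + "/rest/users",
                     headers=dict(auth_headers, **{"X-Api-Version": "100"}),
                     verify=False)
print(users.json())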

Add a User to OneView

Now let's try adding a user. Copy the command given against "POST /rest/users", including the {}, and POST it as follows:

Now you should see the user in the OneView web console. You've just made your first step into DevOps territory!! Well done!!

I'll cover Veeam and more options in OneView in the next post. Enjoy!







Wednesday, 6 January 2016

Which is better? 1 vCPU or 2 vCPU standard VMs


I came across a comment that it's better to use two vCPUs in your VM template rather than a single vCPU. It is meant to perform better, schedule better and scale better than just a single one. Now, I had my doubts, but I've always liked testing these things on a real server to see what happens. My test rig has a 4-core Xeon CPU with plenty of GHz, so I set up 4 and then 8 VMs, tested different loads and configs, and have the ESXTOP results below.

Test #1: 4 x 1vCPU VMs (5 minute test, using 1 core on each VM at 100%, 200MB memory load)

No %VMWAIT
Constant %RDY
Constant %OVRLP
So, it copes well; nothing too crazy here.

Test #2: 4 x 2vCPU VMs but only 1 core maxed (5 minute test, using 1 core only on each VM at 100%, 200MB memory load)


Periodic %VMWait
Constant %RDY
Constant %OVRLP
So, this actually performed better: loadmaster still ran only one thread, but the guest scheduled it between the available vCPUs and got overall better performance. Interesting!

Test #3: 4 x 2vCPU VMs, but with only two of them maxing out a core (5 minute test, using 1 core at 100% on 2 of the VMs only, 200MB memory load)

No CoStop issues seen
Constant %OVRLP on busy VMs
Constant %RDY on all 4 VMs
Periodic %VMWAIT on 2 idle VMs
So, this time things aren't too bad, but the idle VMs are probably starved a bit until they start ramping up as well.

Now we go into overcommitment, exceeding the physical cores available by stacking up more than 4 vCPUs:

Test #4: 8 x 1vCPU VMs (5 minute test, using 1 core each at 75% load, 200MB memory load)

Constant %OVRLP on all VMs
Constant %RDY on all VMs
Pegged the physical cores, but only %RDY really stands out. The ESXi scheduler is doing its job nicely!


Test #5: 8 x 2vCPU VMs (5 minute test, using 1 core each at 75% load, 200MB memory load)

The only difference between this and the last test is the increased number of vCPUs per VM; it's still only a single-threaded 75% load, but it switches between Core 0 and Core 1 inside the VM.
Constant %VMWAIT on some VMs
Constant %CSTP on all VMs
Same workload, but now we're getting scheduling conflicts, as the CPU overprovisioning is 16 vCPUs to 4 pCPUs versus the previous test's 8 vCPUs to 4 pCPUs. Which one do you think performs better?!! Co-Stop isn't too high yet, but with the same number of VMs we're heading into performance-trouble territory.

Test #6: 8 x 2vCPU VMs (5 minute test, using 2 threads @ 37% load, 200MB memory load)

This is similar to the previous test, but we're now directly addressing the second vCPU in each VM.
Constant %CSTP on all VMs - a very high level despite a similar overall workload, just running two threads instead of one. Performance on this would be awful.


Notes:
%OVRLP – Time spent on behalf of a different resource pool/VM or world while the local one was scheduled. Not included in %SYS.
%WAIT – Time spent in the blocked or busy-wait state.
%RDY – Time the vCPU was ready to run but waiting for the scheduler to place it on a physical core.

NWLD – Number of members (worlds) in a running world's resource pool or VM.
(increases when the number of vCPUs goes from 1 to 2)

So, from what I saw, I would say that if you never overprovision your physical CPU and keep a 1-to-1 mapping (i.e. never exceed the total number of physical CPU cores with the total number of virtual CPU cores), then the extra vCPU might actually get you better performance, even with single-threaded workloads.

Once you get into overcommitment, you're looking at issues. You're opening more CPU paths for the VMs onto the physical cores, and while VMware does an amazing job, with like-for-like workloads a lower number of vCPUs performs better - or at least I would expect it to, based on the ESXTOP results above.
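As a quick sanity check you can do on paper or in a few lines of Python (the numbers below simply mirror Test #5: 8 x 2vCPU VMs on 4 cores), the overcommit ratio is just total vCPUs divided by physical cores:

# Rough vCPU:pCPU overcommit check - the figures are illustrative only
physical_cores = 4
vcpus_per_vm = [2, 2, 2, 2, 2, 2, 2, 2]  # one entry per VM

total_vcpus = sum(vcpus_per_vm)
print("%d vCPU on %d cores = %.1f:1 overcommit"
      % (total_vcpus, physical_cores, total_vcpus / physical_cores))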

So if you have a static environment, you have a choice. If you're a consultant and not hands-on day after day as an admin on a particular customer's environment, then I would say you're taking a chance with 2 vCPUs in the template. I would expect the customer to be calling you within a year complaining about really bad performance during critical end-of-month periods, and while I would normally expect a storage issue in that case, a problem caused by too many vCPUs would require right-sizing all the VMs, with downtime for each - not always easy or possible... You makes your choice, you takes your chances!