NetScaler: Generate a simple usage statistic per Vserver

One of my co-workers recently approached me because he needed a simple shell script that would generate a report about a Vserver’s current connections. After ironing out a few things with him (he wanted the output dumped onto a CIFS share on our file server, which I changed to a simple HTML page instead) I went to work.

Out came two scripts: a collection script and a processing script. First the collection script runs, finds the current HA master node and then collects the Vserver’s current connections. After that script has dumped the information (date, time, current connections) into a file, the processing script creates a simple HTML page that shows exactly that information.
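A minimal sketch of the collection part could look like this; the node names, the vserver name, the output path and the grep/awk patterns for the NetScaler CLI output are all assumptions, so adjust them to your environment:

```bash
#!/bin/sh
# Collector sketch: figure out which HA node is currently primary and
# append date, time and the vserver's current client connections to a file.
NODES="ns01 ns02"                       # assumed HA pair
VSERVER="vs_example"                    # assumed vserver name
OUTFILE="/var/www/stats/${VSERVER}.dat" # assumed output path

MASTER=""
for node in $NODES; do
    # "show ha node" reports the node state; the grep pattern is a guess
    # at the exact wording of the output
    if ssh nsroot@"$node" "show ha node" | grep -q "Master State: Primary"; then
        MASTER="$node"
        break
    fi
done
[ -n "$MASTER" ] || { echo "no primary node found" >&2; exit 1; }

# "stat lb vserver" prints the connection counters; again, the awk
# pattern is a guess at the output format
CONNS=$(ssh nsroot@"$MASTER" "stat lb vserver $VSERVER" \
        | awk 'tolower($0) ~ /current client connections/ {print $NF; exit}')

echo "$(date '+%Y-%m-%d %H:%M') $CONNS" >> "$OUTFILE"
```

The processing part then just wraps the collected lines into an HTML table:

```bash
#!/bin/sh
# Processor sketch: render the collected data as a simple HTML page.
VSERVER="vs_example"
DATAFILE="/var/www/stats/${VSERVER}.dat"
HTMLFILE="/var/www/stats/${VSERVER}.html"

{
    echo "<html><head><title>Connections for ${VSERVER}</title></head><body>"
    echo "<table border=\"1\"><tr><th>Date</th><th>Time</th><th>Connections</th></tr>"
    while read -r day time conns; do
        echo "<tr><td>${day}</td><td>${time}</td><td>${conns}</td></tr>"
    done < "$DATAFILE"
    echo "</table></body></html>"
} > "$HTMLFILE"
```

Run the collector from cron every few minutes and the processor right after it, and the page stays reasonably current.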

You’ll also need to have public key authentication configured on both HA nodes, since entering a password by hand doesn’t exactly mix with scripting and is a bit lame.
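Setting that up boils down to generating a key pair on the host running the scripts and appending the public key on both appliances. The /nsconfig/ssh/authorized_keys location for nsroot is from memory, so verify it against your firmware release:

```bash
# On the host that runs the scripts: a key pair without a passphrase,
# since the whole point is unattended use
ssh-keygen -t rsa -f ~/.ssh/netscaler_stats

# Append the public key on both HA nodes (the path is an assumption, see above)
cat ~/.ssh/netscaler_stats.pub | ssh nsroot@ns01 "cat >> /nsconfig/ssh/authorized_keys"
cat ~/.ssh/netscaler_stats.pub | ssh nsroot@ns02 "cat >> /nsconfig/ssh/authorized_keys"
```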


NetApp – Copy LUN mappings

Well, today I had another idea for a script (basically like the one I wrote for SVC’s VDisk mappings a while back):
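The rough idea as a sketch: read the existing mappings of one igroup off the filer and replay them onto another one. The filer name, the igroup names and the column positions in the `lun show -m' output are assumptions and will need adjusting:

```bash
#!/bin/sh
# Sketch: replicate the LUN mappings of one igroup onto another igroup,
# keeping the LUN IDs. Assumes 7-Mode style "lun show -m" / "lun map"
# over ssh; check the column layout of "lun show -m" on your release.
FILER="filer01"
SRC_IGROUP="esx_old"
DST_IGROUP="esx_new"

# Assumed columns: $1 = LUN path, $2 = mapped igroup, $3 = LUN ID
ssh root@"$FILER" "lun show -m" \
    | awk -v ig="$SRC_IGROUP" '$2 == ig {print $1, $3}' \
    | while read -r lunpath lunid; do
        echo "mapping $lunpath to $DST_IGROUP with LUN ID $lunid"
        ssh -n root@"$FILER" "lun map $lunpath $DST_IGROUP $lunid"
    done
```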

I’ll post the counterpart of the script (to remove the LUNs) in a second post later on.

NetApp – Get a list of volumes containing too many LUNs

Well, after figuring things out (and realizing that if you create a LUN the same size as its volume, the snapshots will eventually fill the volume and break it), I decided to write yet another script to figure out which volumes needed fixing.
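Sketched out, the idea is: walk over every volume, add up the LUN sizes and the current snapshot usage, and print the volumes that come up short. The ssh target and all of the parsing of the `df -k' and `lun show' output below are rough guesses and will need tweaking for your ONTAP release:

```bash
#!/bin/sh
# Sketch: flag volumes that are too small for their LUNs plus snapshots.
FILER="filer01"    # assumed filer name

ssh root@"$FILER" "df -k" \
    | awk '$1 ~ /^\/vol\// && $1 !~ /\.snapshot/ {print $1, $2}' \
    | while read -r vol volsize_kb; do
        volname=${vol%/}    # e.g. /vol/vol1

        # Sum the byte sizes that "lun show" prints in parentheses (guessed layout)
        luns_kb=$(ssh -n root@"$FILER" "lun show" \
            | awk -v v="$volname/" 'index($1, v) == 1 {gsub(/[()]/, "", $3); sum += $3}
                                    END {printf "%.0f", sum / 1024}')

        # Space currently eaten by the volume's snapshots (guessed column)
        snap_kb=$(ssh -n root@"$FILER" "df -k" \
            | awk -v v="$volname/.snapshot" '$1 == v {print $3}')

        needed_kb=$(( luns_kb + ${snap_kb:-0} ))
        if [ "$needed_kb" -gt "$volsize_kb" ]; then
            echo "$volname is short by $(( (needed_kb - volsize_kb) / 1024 )) MB"
        fi
    done
```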

And with that you have a list of volumes, along with the amount of space each one needs to be resized by in order to accommodate the contained LUNs and the snapshots.

Convert a bunch of PDF documents to JPEG

Well, I found a bunch of PDF documents on my disk today that I wanted converted to JPEG. Now, Debian replaced ImageMagick with GraphicsMagick a while back, which is supposedly a bit faster and leaner. So first you need to install graphicsmagick, or rewrite the script to use /usr/bin/convert instead.

The script basically takes every .PDF in your current working directory, creates a sub-directory for it, and then extracts each page of the PDF into a separate JPEG image in that sub-directory.

What I had to google for was how to pad the output number. According to the man page of gm, you just put %02d (or %03d, depending on how many pages your PDFs have at most) in the desired output file name.
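Put together, a sketch of the whole thing; the output file-name pattern is made up, and +adjoin is there so gm writes one file per page:

```bash
#!/bin/sh
# Sketch: convert every PDF in the current directory into per-page JPEGs,
# one sub-directory per document. Needs the graphicsmagick package.
for pdf in *.pdf; do
    [ -e "$pdf" ] || continue           # nothing matched the glob
    dir="${pdf%.pdf}"
    mkdir -p "$dir"
    # %02d pads the page number; use %03d for documents with 100+ pages
    gm convert +adjoin "$pdf" "$dir/page-%02d.jpg"
done
```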

Handling files/directories with spaces in `for’-loops

So every now and then I have a file that needs to be extracted into a directory, and why not name that directory after the archive itself .. The only problem with that is how bash handles variables …

Try it yourself: stuff some directory names containing spaces into a variable, and use something like this:
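For example, something along these lines, with made-up archive names:

```bash
# Suppose the directory contains "my archive.tar.gz" -- note the space
FILES=$(ls *.tar.gz)
for f in $FILES; do             # unquoted expansion splits on every space
    echo "working on: $f"
    mkdir "${f%.tar.gz}"        # ends up creating "my" and "archive"
done
```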

And now take a look at the output of that ..

That means `mkdir' created a directory for every entry in the `ls' output that was separated by a space char … and I had no frickin’ clue how to get that thing right … 😡

Update:
Thanks to Roy I now know how to handle such things … 😳 It’s rather simple *g*
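The gist of it (my reconstruction, not necessarily Roy’s exact lines): loop over the glob directly and quote every expansion, or, if you really need to iterate over command output, let only newlines split words:

```bash
# Variant 1: skip the intermediate variable, loop over the glob, quote everything
for f in *.tar.gz; do
    dir="${f%.tar.gz}"
    mkdir -p "$dir"
    tar -xzf "$f" -C "$dir"
done

# Variant 2: keep the command substitution, but make newline the only separator
IFS='
'
for f in $(ls *.tar.gz); do
    mkdir -p "${f%.tar.gz}"
done
```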

That should give you the desired effect ❗