
mikerochford.com Being technical


Find all the Perl modules in your script and make sure they are installed

Ever migrated code from one server to another and couldn't remember if you captured all the modules that needed to be installed? I have; it's a pain in the ass sometimes. So I thought I would throw together a little script to check whether all the modules I used are installed. I have written it in Python and Perl (don't ask!)



mrochford@mrochford-linux:~/bin$ ./perl_module_check.pl simpleMenu.pl
Found 5 modules in simpleMenu.pl
Testing Config: OK
Testing Data::Dumper: OK
Testing FindBin: OK
Testing Strings::IO: NOT INSTALLED
Testing strict: OK
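The check itself is simple enough to sketch. Something like the following (a minimal version; the real perl_module_check.pl may differ) scans the script for "use" lines and then tries to require each module it finds:

```perl
#!/usr/bin/perl
# Minimal sketch of a module checker: collect "use Module;" lines
# from a script, then attempt to load each module.
use strict;
use warnings;

sub check_modules {
    my ($file) = @_;
    open(my $fh, '<', $file) or die "Can't open $file: $!\n";
    my %modules;
    while (my $line = <$fh>) {
        # Match "use Some::Module;" (skips version pragmas like "use 5.010")
        $modules{$1} = 1 if $line =~ /^\s*use\s+([A-Za-z][\w:]*)/;
    }
    close($fh);
    printf "Found %d modules in %s\n", scalar(keys %modules), $file;
    my %results;
    foreach my $module (sort keys %modules) {
        # String eval so the module name can be dynamic
        my $ok = eval "require $module; 1" ? "OK" : "NOT INSTALLED";
        print "Testing $module: $ok\n";
        $results{$module} = $ok;
    }
    return %results;
}
```

Run it against any script, e.g. `check_modules('simpleMenu.pl')`, and you get output like the run shown above.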


Nagios Plugin: Check for packages to update in Debian/Ubuntu

Over the years I have managed a lot of systems, mostly Debian based. The thing I really hate about having a lot of standalone systems is updating software. I am really bad at keeping track of new updates, and I really don't want a cron script sending me an email every day. Below is a script I used as a plugin for Nagios.

The Script: check_for_updates.pl
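The idea is straightforward: "apt-get -s upgrade" simulates an upgrade and prints one "Inst ..." line per pending package, so the plugin only has to count those lines and map the result to Nagios exit codes. A minimal sketch (the real check_for_updates.pl may differ):

```perl
#!/usr/bin/perl
# Minimal sketch of a Nagios update-check plugin for Debian/Ubuntu.
use strict;
use warnings;

# Standard Nagios plugin exit codes
my %EXIT = (OK => 0, WARNING => 1, CRITICAL => 2, UNKNOWN => 3);

# Count pending packages: "apt-get -s upgrade" prints one line
# starting with "Inst " for each package it would upgrade.
sub count_pending {
    my @lines = @_;
    return scalar grep { /^Inst\s/ } @lines;
}

# Run the live check only when invoked as a plugin.
if (@ARGV && $ARGV[0] eq '--check') {
    my @output  = `apt-get -s upgrade 2>/dev/null`;
    my $pending = count_pending(@output);
    if ($pending > 0) {
        print "WARNING: $pending packages need updating\n";
        exit $EXIT{WARNING};
    }
    print "OK: all packages up to date\n";
    exit $EXIT{OK};
}
```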


Backup of your MySQL server using mysqldump

Over the years of being an admin I have seen many ideas on how to back up a MySQL database server. Many people are satisfied with one large dump of all databases into one flat text file. Others think copying the actual db files to other locations is the way to back things up. Having dealt with customer and internal databases, I have found you need incremental backups. With all that said, I developed a Perl script that will log in, grab all existing database names, and do a mysqldump of each database. It will also keep a specified number of backups, and you can choose to have some dumps gzipped or left uncompressed. Below is the link to the code. You will have to update the login credentials and the IP address of the MySQL server.

Link: mysql_backup.pl

Explanation of the script:

The script will log in to your MySQL server and issue a "show databases;". Once the database names are retrieved, it will run a mysqldump on each database and store the output in the specified directory. The naming scheme of the output is database_name-YYYYMMDD.sql, which allows you to sort the dumps by date. Once all mysqldumps have completed, it goes through each database directory and decides what should be kept, deleted, gzipped, or left alone. You can choose how many days of backups to retain and how many days' worth to leave as uncompressed files.
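The two interesting pieces, the dated filename and the keep/gzip/delete decision, can be sketched like this (a simplified version of the logic described above; the real script wires these into the mysqldump loop):

```perl
#!/usr/bin/perl
# Sketch of the naming and retention logic only; the full script
# drives these from "show databases" and shells out to mysqldump.
use strict;
use warnings;
use POSIX qw(strftime);

# Build the dump filename: database_name-YYYYMMDD.sql
sub dump_filename {
    my ($db, $time) = @_;
    return $db . '-' . strftime('%Y%m%d', localtime($time)) . '.sql';
}

# Decide what to do with a backup that is $age_days old, given how
# many days to keep uncompressed and how many days to retain overall.
sub retention_action {
    my ($age_days, $keep_plain_days, $retain_days) = @_;
    return 'delete' if $age_days > $retain_days;
    return 'gzip'   if $age_days > $keep_plain_days;
    return 'keep';
}
```

With a 30-day retention and 7 uncompressed days, a 10-day-old dump gets gzipped and a 40-day-old dump gets deleted.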


Connecting to a machine using sockets

I work with an API system that connects on a high port. My co-worker Matt came up with a great way to set the socket up and verify that it isn't stale. This sub-routine is specific to the API, but you can apply the same idea to pretty much any socket needs.

sub cp_connect {
     # This function connects to a cp server
     # (requires "use IO::Socket::INET;" at the top of the script)
     # Accepts:
     # $remote_host The server to connect to
     # $remote_port The port to connect to
     # $remote_password The login password
     # $remote_connection_type "rw" for read-write access
     # $EOL The line terminator the server expects
     # Returns:
     # A Critical Path mail server socket connection.

     my $remote_host = shift;
     my $remote_port = shift;
     my $remote_password = shift;
     my $remote_connection_type = shift;
     my $EOL = shift;

     if($remote_connection_type =~ m/^rw$/){
          $remote_password = $remote_password . " write";
     }

     my $socket = IO::Socket::INET->new(
          PeerAddr => $remote_host,
          PeerPort => $remote_port,
          Proto    => "tcp",
          Type     => SOCK_STREAM,
          Timeout  => 5 )
          || die "Couldn't open socket!\n\n";

     my $answer = <$socket>;    # read the server banner
     print $socket "LOGIN $remote_password" . $EOL;
     $answer = <$socket>;
     if ($answer !~ /^OK/) {
          print "Failed to login to $remote_host : $remote_port : $answer\n\n";
     }
     return $socket;
}

Once you understand how the sub-routine works, building a script that interacts with the API is a piece of cake.

Here is an example of how we use the sub-routine in a script. It checks whether the socket is defined; if not, it tries to connect and define it. Once the socket is defined, it issues a command and reads the response.

if(!defined $socket) {
    $socket = cp_connect($remote_host, $remote_port, $remote_rwpass, "rw", $EOL);
}
print $socket "DOMAIN ENUMERATE" . $EOL;
while (defined($answer = <$socket>)) {
    chomp $answer;
    if($answer =~ /\*\s+(.+)\s/) {
        $domains{$1}{'users'} = ();
    }
    elsif($answer =~ /^ERROR/){
        $error = $answer;
        return 0;
    }
    elsif($answer =~ /OK/) {
        return 1;
    }
}

sub-routine: gettime();

Over the years working as a systems administrator I have built up a large sub-routine collection. One that I use quite frequently is gettime();. This is a simple routine someone in my group wrote to return a nicely formatted timestamp. It is used mostly for logging and such.

sub gettime {
    my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
    $year += 1900;
    $mon  += 1;
    my $retval = sprintf("%04d-%02d-%02d %02d:%02d:%02d",
                         $year,$mon,$mday,$hour,$min,$sec);
    return $retval;
}


print gettime(),"\n";


2009-05-27 07:18:43

Hiding your STDIN

Ever write a script where you have to enter an elevated user's password? Or even your own? Check this out.

use Term::ReadKey;
print "Please enter your username: ";
chomp(my $username = <STDIN>);
print "Please enter your password for $username: ";
ReadMode('noecho');                  # turn off terminal echo
chomp(my $password = ReadLine(0));
ReadMode('restore');                 # turn echo back on
print "\n";

# For example only. You shouldn't print this...
print "You entered $username for a username and $password for a password!!\n";

Using Term::ReadKey you can disable terminal echo while reading input. This way no one will see your password from over your shoulder.


Howto access an array that is in a hash

Say you have a hash element that contains an array and you want to loop over the array. You usually end up with these kinds of data sets when you are gathering data.

We will use the following code to build the hash:

use Data::Dumper;
my %data_set;
$data_set{'users'} = [];   # start with an empty array reference
$data_set{'users'}[0] = "testA";
$data_set{'users'}[1] = "testB";
$data_set{'users'}[2] = "testC";
$data_set{'users'}[3] = "testD";
$data_set{'users'}[4] = "testE";
$data_set{'users'}[5] = "testF";
print Dumper(%data_set);


$VAR1 = 'users';
$VAR2 = [
          'testA',
          'testB',
          'testC',
          'testD',
          'testE',
          'testF'
        ];

Now that we know the hash element is populated, we want to access each array element. This can be done without building some crazy counter in a while loop. Check it out.

foreach my $user (@{$data_set{'users'}}){
    print $user,"\n";
}

Now that you know you can access an array within a hash using @{}, you can push, pop, splice, and shift on the array that is in the hash.

Here is the same script as above, but using push:

use Data::Dumper;
my %data_set;
$data_set{'users'} = [];
push @{$data_set{'users'}}, "testA";
push @{$data_set{'users'}}, "testB";
push @{$data_set{'users'}}, "testC";

foreach my $user (@{$data_set{'users'}}){
    print $user,"\n";
}


Howto push a hash onto an array

This information was given to me by my friend Matt:

use Data::Dumper;
my @array;
push @array, {'key1' => 'value1', 'key2' => 'value2'};
push @array, {'key1' => 'value1', 'key2' => 'value2'};
push @array, {'key1' => 'value1', 'key2' => 'value2'};
print Dumper(@array);

Will give you:

$VAR1 = {
          'key2' => 'value2',
          'key1' => 'value1'
        };
$VAR2 = {
          'key2' => 'value2',
          'key1' => 'value1'
        };
$VAR3 = {
          'key2' => 'value2',
          'key1' => 'value1'
        };
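To read the values back, loop over the array and dereference each element as a hash reference:

```perl
# Each element of @array is a hash reference, so use the -> arrow
# to get at its keys. Distinct values used here to make the access clear.
use strict;
use warnings;

my @array;
push @array, {'key1' => 'valueA', 'key2' => 'valueB'};
push @array, {'key1' => 'valueC', 'key2' => 'valueD'};

foreach my $href (@array) {
    print $href->{'key1'}, " ", $href->{'key2'}, "\n";
}
```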


Finding a host header in IIS using Perl and Win32::OLE

There is no easy way to find a host header in IIS, unless you think clicking on each site is easy. I have found it especially difficult in large installations. I wrote a script that goes through each site on the server, looks at its host headers, and prints the site comment when it finds a match. You can then use the site comment to find the site in the MMC.

Here is the script.
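The core of the approach can be sketched like this, assuming the IIS 6 metabase layout on Windows 2003 (property names may differ on other versions). ServerBindings entries have the form "IP:Port:HostHeader", so most of the work is splitting that string; the Win32::OLE enumeration only runs on the IIS server itself, so it is shown as comments:

```perl
#!/usr/bin/perl
# Sketch only; the full script loops over every site and matches
# against the host header you are looking for.
use strict;
use warnings;

# Pull the host header out of a "IP:Port:HostHeader" binding string
# (it may be empty when no host header is set).
sub host_header {
    my ($binding) = @_;
    my (undef, undef, $header) = split /:/, $binding, 3;
    return $header;
}

print host_header("10.0.0.5:80:www.example.com"), "\n";   # www.example.com

# On the IIS server itself the enumeration looks roughly like this
# (requires Win32::OLE, so shown as comments here):
#
#   use Win32::OLE qw(in);
#   my $w3svc = Win32::OLE->GetObject("IIS://localhost/W3SVC");
#   foreach my $site (in $w3svc) {
#       next unless $site->{Class} eq "IIsWebServer";
#       foreach my $binding (@{$site->{ServerBindings}}) {
#           my $header = host_header($binding);
#           print $site->{ServerComment}, ": $header\n" if $header;
#       }
#   }
```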


Changing all directory paths in Microsoft IIS/Active Directory using perl and WIN32::OLE

I was put in charge of a filer migration from an old NetApp F810 to a new NetApp 3040 cluster. One of the systems using the NetApp backend was a web cluster running MS Windows 2003. When it was first built, it used CIFS shares referenced by system name (ie: \\f810..tld\customer-home). This is fine if you never change filers, but it was a poor idea on our part. In theory you could have just changed the record to point at the new filer, but that would have been a temporary fix. We wanted to change the path completely to an arbitrary name like "web", which lets us point it at any backend device, no matter what it is. I like to do things programmatically, so below are the two scripts I used to get the job done. You will need to modify these scripts to fit your needs (ie: paths).

Bulk IIS path change script: Link

Bulk Active Directory change script: Link
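Both scripts boil down to the same rewrite: swap the old filer prefix for the new arbitrary name. A sketch of that helper, with placeholder UNC names (not our real filer paths):

```perl
#!/usr/bin/perl
# Sketch of the shared path-rewriting logic; "oldfiler" and "web"
# below are placeholders for your own UNC prefixes.
use strict;
use warnings;

# Rewrite a UNC path from the old filer prefix to the new name,
# case-insensitively (Windows paths are case-insensitive).
sub rewrite_path {
    my ($path, $old_prefix, $new_prefix) = @_;
    $path =~ s/^\Q$old_prefix\E/$new_prefix/i;
    return $path;
}

# Example: \\oldfiler\customer-home\site1 -> \\web\customer-home\site1
print rewrite_path('\\\\oldfiler\\customer-home\\site1',
                   '\\\\oldfiler', '\\\\web'), "\n";
```

In the bulk scripts this helper is applied to each IIS home-directory Path (and each Active Directory profile path) before writing the value back.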