Category Archives: Linux

converting vmware windows virtual machines to citrix xenserver virtual machines

After hunting around for quite some time, along with a lot of trial and error, this process ended up working well for me.

1. download MergeIDE, unzip it, and run it on the vmware instance you want to move.

2. uninstall the vmware tools.

3. save all your network settings. run:

ipconfig /all > network.txt
netstat -rn >> netstat.txt

4. on the XenServer, create a new windows VM with the same CPU/Memory/Disk Size specs as your vmware VM

5. get rid of any snapshots you had made for the vmware instance.

6. depending on what form of vmware you are running (workstation/server/esxi/esx) you might have to convert the disk image using something like vmware-vdiskmanager (which comes with all the vmware products):

vmware-vdiskmanager -r vmware_image.vmdk -t 0 temporary_image.vmdk

if you aren't sure whether you need to convert the disk, it doesn't hurt to convert it anyway; you just might waste time and disk space. If you are using ESXi, you don't need to convert the disk.

7. assuming you have access to the vmware .vmdk disk image, run this from a linux box:

qemu-img convert ./name_of_source_vmdk_file.vmdk VM_Instance_Name.img
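If qemu-img has any trouble auto-detecting the formats, you can also spell them out explicitly; a minimal sketch of the same conversion (same placeholder filenames as above):

qemu-img convert -f vmdk -O raw ./name_of_source_vmdk_file.vmdk VM_Instance_Name.img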

8. you need to access the new converted disk image from the XenServer, so put it on an NFS mount or something.
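For example, if the image is sitting on an NFS export somewhere, something along these lines on the XenServer should do it (the file server name and paths here are made up):

mkdir -p /mnt/images
mount -t nfs fileserver:/export/images /mnt/images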

9. First, figure out the UUID of the virtual disk that was created when you created the new xen instance. It's much easier if you go into the XenCenter GUI, go to the instance you created, and rename the disk to something useful.

Then you can ssh into the XenServer and type:

xe vdi-list name-label=whatever_you_named_the_disk

copy down the UUID of the disk image.

Then run:

xe vdi-import uuid=uuid_of_disk filename=name_OF_SOURCE_DISK_IMAGE.img

after a while, you get dropped back to the prompt, and you can fire up the XenServer instance.
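If you would rather not copy the UUID around by hand, xe can hand it straight to a variable; a rough sketch, assuming you renamed the disk as described above:

UUID=$(xe vdi-list name-label=whatever_you_named_the_disk params=uuid --minimal)
xe vdi-import uuid=$UUID filename=name_OF_SOURCE_DISK_IMAGE.img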

SNMP on a cisco 6509 and intermapper

At my work we use InterMapper to monitor all our equipment. I was trying to get the SNMP probe it has for Cisco equipment to work with our Cisco 6509 switch, but apparently Cisco decided that it would be fun to use completely different OIDs for that line of switches. So I spent hours yesterday trying to get it to work.

Sure, Cisco has a nice repository of all the MIBs for all their equipment, but they are all uncompiled and missing the actual OIDs.

Granted I am not nearly as familiar with SNMP stuff as I would like to be, but come on.

Look at the number of MIBs available just for the 6500 series:
ftp://ftp-sj.cisco.com/pub/mibs/supportlists/wsc6000/wsc6000-supportlist-ios.html

All I am looking for is the CPU load and the amount of memory available. For the 5-second CPU load, according to the MIB file, this is what I need:

cpmCPUTotal5sec OBJECT-TYPE
SYNTAX Gauge32 (1..100)
MAX-ACCESS read-only
STATUS deprecated
DESCRIPTION
"The overall CPU busy percentage in the last 5 second
period. This object obsoletes the busyPer object from
the OLD-CISCO-SYSTEM-MIB. This object is deprecated
by cpmCPUTotal5secRev which has the changed range of
value (0..100)."
::= { cpmCPUTotalEntry 3 }

Part of the fun is the deprecation chain. As you can see in the MIB excerpt, cpmCPUTotal5sec was deprecated by cpmCPUTotal5secRev. If you go to the cpmCPUTotal5secRev section, it says it was deprecated by cpmCPUTotalMonInterval, and so on down the chain. But of course the only one of those that is actually in our version of the 6509 is cpmCPUTotal5sec.

Anyway, it sure would be nice if the OID were listed in that MIB file. Then I found this file:
ftp://ftp.cisco.com/pub/mibs/oid/CISCO-PROCESS-MIB.oid

One of the lines says:
"cpmCPUTotal5sec" "1.3.6.1.4.1.9.9.109.1.1.1.1.3"

So I should be all set now, right? No.

This might be an issue with our version of InterMapper, because if I use snmpwalk like this:

snmpwalk -v 2c -c CommunityName IPAddress 1.3.6.1.4.1.9.9.109.1.1.1.1.3

I get this result:
SNMPv2-SMI::enterprises.9.9.109.1.1.1.1.3.9 = Gauge32: 21

It sure looks like that should work. I get a value and everything! So I write the custom SNMP probe for InterMapper with the 3 OIDs I want to watch. But none of them work; InterMapper claims none of those OIDs are available on the switch. Of course snmpwalk disagrees. So I figure I just completely messed up writing the probe.

So this morning I come into work figuring I would give it a fresh go. I happened to be looking through the options for snmpwalk, and noticed the “-O n” option, which prints out the OID numerically. Which returns:
.1.3.6.1.4.1.9.9.109.1.1.1.1.3.9 = Gauge32: 21
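For reference, that numeric output is just the same walk rerun with the numeric-output flag added, something like:

snmpwalk -On -v 2c -c CommunityName IPAddress 1.3.6.1.4.1.9.9.109.1.1.1.1.3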

So apparently, my problem the whole time was that InterMapper wants the OID to look like this:
.1.3.6.1.4.1.9.9.109.1.1.1.1.3.9
Instead of this:
1.3.6.1.4.1.9.9.109.1.1.1.1.3

Not sure what the .9 at the end does, but go figure… It sure would be nice to just make the OID available in the first place, without jumping through so many hoops.

For anyone who cares, these are the OIDs that seem to make the most sense:

cpmCPUTotal5sec .1.3.6.1.4.1.9.9.109.1.1.1.1.3.9
cpmCPUTotal1min .1.3.6.1.4.1.9.9.109.1.1.1.1.4.9
cpmCPUTotal5min .1.3.6.1.4.1.9.9.109.1.1.1.1.5.9
ciscoMemoryPoolFree 1.3.6.1.4.1.9.9.48.1.1.1.6
DRAM .1.3.6.1.4.1.9.9.48.1.1.1.6.1
FLASH .1.3.6.1.4.1.9.9.48.1.1.1.6.6
NVRAM .1.3.6.1.4.1.9.9.48.1.1.1.6.7
MBUF .1.3.6.1.4.1.9.9.48.1.1.1.6.8
CLUSTER .1.3.6.1.4.1.9.9.48.1.1.1.6.9
MALLOC .1.3.6.1.4.1.9.9.48.1.1.1.6.10
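If you want to sanity-check any of these against your own switch before wiring them into a probe, snmpget works fine with the full numeric OIDs, for example:

snmpget -v 2c -c CommunityName IPAddress .1.3.6.1.4.1.9.9.109.1.1.1.1.3.9
snmpget -v 2c -c CommunityName IPAddress .1.3.6.1.4.1.9.9.48.1.1.1.6.1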

Memory stuff
ftp://ftp.cisco.com/pub/mibs/v2/CISCO-MEMORY-POOL-MIB.my
ftp://ftp.cisco.com/pub/mibs/oid/CISCO-MEMORY-POOL-MIB.oid

CPU/Process stuff
ftp://ftp.cisco.com/pub/mibs/v2/CISCO-PROCESS-MIB.my
ftp://ftp.cisco.com/pub/mibs/oid/CISCO-PROCESS-MIB.oid

InterMapper Cisco 6500 Probe:
http://aisle10.net/intermapper-snmp.cisco6500.txt

expensive equipment, a hammer, backups, and disaster recovery; A good mix

I found out yesterday that apparently using a hammer and a Phillips head screwdriver to drive a SCSI cable through a maybe 1/8 inch opening between my desk and the cube wall it is screwed into is a bad idea.

I spent a couple hours between yesterday afternoon, later on that night, and some time this morning trying to figure out why my Linux box refused to acknowledge the existence of the Sun StorEdge L8 LTO tape autoloader I hooked up to it. I didn't think the screwdriver actually went into the cable at all. It just looked like it busted into the magnet that surrounds the cable near the end. That thing really needed to be driven through the desk. On the good side, it gave Bill and me a good excuse to use a hammer and a bunch of prying tools to "install" a tape autoloader.
I have been trying to implement a fairly reliable backup system for a few small file servers we have at the office. The previous group of people that managed the backups for these systems had a disaster recovery plan that involved a rotation of backups traveling through 3 separate physical locations. It seemed like a bit of overkill, but then again, it is better to be safe. The funny thing is that the backups were all on a bunch of 4mm 20 gig (uncompressed) tapes. The 3 servers that were being backed up totaled somewhere around 500 gigs… maybe a bit less. The best part was that between the 3 servers there were only 2 tape drives. 2 very slow tape drives. Plus, the majority of the data that was being backed up was uncompressible: movies, audio, and pictures mostly. So this involved a lot of tapes.

It took a good 3 hours for 1 tape to get filled. They would get no notification that it was ready for the next tape, so every couple of hours they would go and log into the machine, or just check if the tape drive had ejected a tape, then switch it, and rinse and repeat for the 2 day (or more) long backup. Luckily incremental backups weren't as bad, but most of the time I don't think they could even happen given how long a full backup would take. If you forgot to change the tape for a while, you just might have wasted a whole day's worth of time that the backup could have been chugging along. The tapes would get put into a plastic tape case that looked like it was supposed to be rushed to the hospital for a life-saving organ transplant. Then that would get carted off to the first off-site location in the big 3-location backup plan.

Then the group that had been handling these backups… plus a bunch of other tasks, got moved to another location because of "streamlining" how their group worked. Which is when my co-worker and I got stuck with all the fun. Neither one of us had the time to keep checking to see when the next tape needed to be changed, so a full backup would take maybe 2 weeks to finish.

Anyway, that is a bunch of back story that doesn't really matter. I really wanted to just complain about Backup Exec, and some oddness associated with the Arkeia trial installation I have been working on. The whole old backup system for these 3 machines used Backup Exec. I really, really, really don't like Backup Exec. The UI was poorly designed, the server has to run on a Windows machine, and Veritas/Symantec decided to screw over their customer base and not offer any encryption option unless you upgraded to their $20,000 Enterprise "we screwed you" 2.0 package (I made that price up). I didn't realize that until I was going to upgrade the 3 client installs and the 1 Backup Exec server to their most recent version.

But I did get a chance to try out the Sun StorEdge L8 autoloader we have had lying around for who knows how long. The L8 uses 200 gig LTO tapes (400 compressed), and when I tried the first backup on the trial of the new Backup Exec, the entire backup of the 3 systems took around 4 hours to finish, and everything fit on a tape and a half. On the bad side, the L8 only holds 8 tapes, one of which is a cleaning tape, so really 7. That isn't a safe number for a full, mostly automated backup strategy, but it is still much better than the previous setup.

After I found out about the lack of encryption support, that got weighed in with the crappy UI and the need for a Windows 2003 server, and we decided to try something else. Since my co-worker loved Arkeia so much, I figured I would give that a try.

For a test install, I hooked the StorEdge autoloader up to a Sun V120 running Solaris 10, and got a bunch of trial licenses for Arkeia. The installation was completely painless; everything was pretty straightforward. The only part that took any time was getting the V120 to recognize the autoloader, but that can't be blamed on the software. It was more my lack of knowledge.

Arkeia has a really well thought out X interface that everything can be set up from, and you can install the server on a variety of platforms: Solaris, Linux, FreeBSD, etc. Most installs involve just typing rpm -i, dpkg -i, or ./install, depending on the packaging system on the server. I was pretty surprised at how well thought out everything was.

After I got everything going, I tried the first backup. I left encryption off, and figured I would try the best (compression-wise) compression method, which was LZ3. The backup gets started, and I looked at the fun little speedometer the X interface displays during an interactive backup. You can see a bunch of different metrics, like MB/h, MB/min, MB/s, and KB/s for both the network and the backup speed. This is when things started to go downhill. The max backup speed I was getting was 5 gigs an hour. Then I thought maybe the compression was too much for a V120. The load on the machine was a little over 1, but still, something didn't seem right.

I tried the backup again with no compression this time, and left work for the weekend (this was on Friday). Sometime Saturday I logged in to see how things were going, and in 33 hours it had backed up a whopping 144 gigs. This was never going to finish. I tried a bunch of different things, then on Monday we tried doing an scp of a large file from the V120 to various other machines. I was getting the same crappy throughput. The port on the switch was set to auto-negotiate, so I tried forcing it to 100/full duplex, but no difference. It must be a misconfiguration of some kind either on the switch or with the interface on the server, but it was happening on a couple of the other servers on that same bank of switches, so I figured I would just try a more localized test install on my Sun Ultra 20, which is running OpenSuse 10.0/64-bit. Arkeia had an rpm for Suse Enterprise 64-bit, and that installed without a problem.

I really didn't want to shove the autoloader under my desk, and I found a SCSI cable that was long enough to let me put the autoloader on the corner of my cube against a wall. The only problem was that the hole in the desk for cables to pass through can't fit the whole SCSI cable end. Which left me with 2 options: leave it under the desk, or figure out a way to get the cable up behind the desk. Which is where the hammer and a bunch of large screwdrivers came in. My co-worker pried from the top, and I was prying with another screwdriver from the bottom while trying to push the cable through the little opening. I was thinking how funny it would be if we ripped the desk out of the cube wall by accident and the whole thing crashed on top of me (including my co-worker), but the cable got through. Except for that damn metal cylinder at the end of the cable. This was going to take some finesse. After trying everything, I decided to use a Phillips head as a wedge, and just smacked it as hard as I could until the stupid metal/plastic/rubber thing went up through the crack… with the screwdriver inside. The cable looked fine, but apparently it wasn't.

This morning, after trying everything I could think of to get my system to recognize the new SCSI device, I figured I would try another cable, and all I could find was a little 3-foot-long cable. So under the desk the autoloader went. It is actually just balancing on top of a little TeraStation NAS device. If I touch it with my foot by accident, I am sure it will flip on its side, but that is part of the fun.

So, I plug in the autoloader, reload the SCSI card module, and lo and behold, there it is in all its glory. So I set Arkeia up real quick and get a backup going. No compression or encryption, which is the same as the last backup I did on the Solaris install. The backup speed now is averaging 30-40 gigs an hour.

I have no idea what was up with the v120, but if you saw our network closet, our network…actually, any of our stuff, you would run in horror. So now I can add that to my never decreasing list of tasks.

"figure out why throughput on half the equipment sucks"

The funny part, I guess, is that my Ultra 20 is my main workstation. I wrote this post on it, in KDE, with a bunch of other stuff running, all during the backup.

dodging a microsoft bullet

Lately I have been building and maintaining more Windows 2000 and 2003 servers than I would ever like to. I think it ended up being a necessary evil, needed to tie the many different system architectures, systems, and company divisions together.

Having something even remotely close to a single sign-on type of authentication system would be great. Every time a new employee starts at my work, there are at least 3 to 5 separate accounts that need to be created.

1. The phone system

2. a windows login

3. a unix login

4. a login to our CMS

5. a login to the horrible “email system”

Most of the unix (Solaris/Linux mostly) systems that we have use at least NIS, but everything else is completely separate. Getting a working phone number and a working Windows login come from completely different departments… actually different buildings.

I can see how annoying this must be for a new employee. You sit at your desk trying to adjust to your new job and you can’t receive email, maybe have no phone, possibly no computer. I think at this point they should just get a 3 subject notebook, a couple folders, 2 post-it pads, and a pen the day they start, because that will get them a lot further.

So, a couple of weeks ago my office had a massive phone outage due to some “issues” with a telecommunications company that begins with a V. We ended up with literally 43 non-working phones. That is easily more than half the company that could no longer use the phone. The phone system isn’t controlled, maintained, or basically touched by me or anyone else in my department. It is handled by a separate division of the company that for the most part doesn’t want to be bothered with our stupid phone problems.

Nothing was getting fixed, technicians were poking at everything attached to the phone system, and my time (along with others’) was getting wasted more and more. So we decided it was time to start cutting the few lifelines we have with the other division in the company. They have an archaic, poorly maintained phone system that we can’t diagnose anything on, and sales people don’t like you very much when they can’t use their phone. Or even better, when they’re in the middle of a call with a possible client and the phone just drops the connection. There are many reasons; the list goes on and on.
So it seemed like this would be a great chance to just ditch the old phone system and install a shiny new VoIP phone system. We figured out that we could maintain all our office’s phones internally on the VoIP system, and then any incoming/outgoing calls from outside the office would go directly to the old phone system’s switch.

So then, after thinking things out, it seemed like this would be a great opportunity to finally start using LDAP for all our user accounts. This quickly changed over to building an Active Directory. Enter Microsoft.

If we installed an Active Directory, we could get off of the other division’s old, slow Windows NT domain. We would be able to create all the Windows accounts ourselves, meaning employees could actually log in to their computers when they come in for work. Sounds great, doesn’t it? But now that means the Active Directory is in charge of everything. Is that a bad thing? I don’t really know, but I (and most people I work with) have never been big fans of using Windows… even more so as a server. Which is why we have 3: 1 primary and 2 backups. I suppose the odds of all 3 blue-screening at the same time are slim.

So where does the bullet dodging come in? Active Directory likes to be able to dynamically change DNS entries. I wasn’t familiar with how to do that in BIND, and while clicking all the Next buttons involved in installing Win 2003 and the Active Directory, it has a pretty little radio button that says “hey there… if you want, I could install Microsoft DNS! you’ll be all set!” It was a pretty radio button and it almost lured me in, but thankfully I looked on Google and found out that it’s actually one stupid line that needs to be added to the BIND config.

So I just made a new zone file for Windows to play around in, without taking over everything like it was Skynet.
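The general idea in BIND is just a separate zone with an allow-update statement on it; a rough sketch (the zone name and subnet here are made up, not necessarily what I used):

zone "ad.example.com" {
        type master;
        file "dynamic/ad.example.com.zone";
        // let the domain controllers register and update their own records
        allow-update { 192.168.1.0/24; };
};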

Knowing that at least I am not now running Microsoft DNS means one less cold shower I need to take this weekend. The stench of Windows is everywhere, and if the testing of this other product goes well, we’ll have a pretty little PAM module installed on all our Linux and Solaris boxes that will make everything authenticate off of the Active Directory. Group and system policies included.

On the bad side, I just sold my soul to the devil. On the good side, having there be 1 account for virtually all the internally maintained systems the company uses would be nice.

At least nothing on or around my desk, or anything I have to log into, begins with a lowercase i.

That's when I just have to throw in the towel.

Sun is out to get me, and God told them to do it

After mucking around with it for 3 days off and on, I come into work 2 hours early today to get a head start on getting the Sun Java Enterprise Server (with LDAP/Messaging support) running and populated so that my work can finally move off of NIS/40 other authentication systems.

I have it to the point where all that is left is to run the various post-deployment configuration scripts and steps, which I find odd in the first place. Why are there configuration steps that you have to do after you finish the configuration? What is the point of having a configuration wizard with a product if, after you finish using it, the wizard then says “yeah, uh, you still have things to do… I don’t know what, but there is stuff, and it is in document 819-2328.”

The fun part is that document 819-2328 is on Sun’s docs.sun.com website. Which gives the good ol’

Server Error

This server has encountered an internal error which prevents it from fulfilling your request. The most likely cause is a misconfiguration. Please ask the administrator to look for messages in the server’s error log.

message. That doesn’t look like post deployment instructions to me. You know, I always thought that generic 500 error was stupid. So I am supposed to just go and contact “the administrator” at Sun? I am sure Sun only has one administrator..just one. Not only that, but I am sure he is just sitting at his desk…twiddling his fingers just waiting for the phone to ring for me to say “hey, uhh…your website is down…your probably didn’t get 500 calls, pages and emails about it, but yeah…I just wanted you to know. Could you get it back up soon?”
Someone out there really does not like me. It must be because I didn’t pay much attention to Ash Wednesday. Now God is smiting me.

I think that this is what happened to the server:

[image: melted computer]

On a related note, aside from the massiveness of the entire Java Enterprise Server system, it actually is fairly cool. The web mail client that comes with the messaging server is not the best thing in the world, but it is fairly decent, and adding info to the LDAP directory with their Java interface is beyond easy. I don’t know why it took around 4 years for us to finally set one up. I guess it is probably because of the 300 other projects that are always going on.


good ol’ qmail

I decided I would start being more “vigilant” in finding some consulting work, and I came across a fairly easy one that basically involves installing qmail.

That will be in another post because I have a bunch to say about consulting work and the likes.

Anyway, I was looking through the Makefile for qmail, and saw a line that made me laugh:

OS_SPECIFIC=#-DSOLARIS_STUPIDITY
Yes, this entire post was disjointed, ran all over the place, and probably has spelling errors, but that is what makes me special.


Mail spool full on a Postfix mail server

The default maximum mail spool size in Postfix is 51200000 bytes. Since I get a trillion emails a day and only delete the small amount of spam that sneaks through my work's spiffy mail filter, I apparently went over the limit today. That is really sad considering I almost emptied my inbox maybe 2 months ago. I guess that goes to show that no one at my work uses the ticketing system like they are supposed to. Which is why I like them all so much.

Anyway, if you want to change the limit to a different value, you just have to change one line in the Postfix config and reload Postfix.

If you have some version of the default Postfix config, edit main.cf and look for this line:

mailbox_size_limit = 51200000

(If you are delivering to virtual mailboxes, the equivalent parameter is virtual_mailbox_limit.) If it isn’t there, just add it to the bottom; that way, even if you missed it somewhere in the config, Postfix will read your new entry last and it will override the earlier value. If it is there, make sure it’s not defined again later on in the config, because then your change won’t make a difference.

Change it to whatever you want the new maximum mailbox size to be. I decided an extra 100 megs would be good, so mine looks like this:

mailbox_size_limit = 151200000

Then save the file and type:

postfix reload
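If you would rather not edit main.cf by hand, postconf can do the same thing; a quick sketch using the same value as above:

postconf mailbox_size_limit
postconf -e "mailbox_size_limit = 151200000"
postfix reload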

Now you’re all set, and you can laugh at all the stupid other people at work that are stuck using Lotus Notes and have to purge their mailboxes every 30 days before they get deleted… or “archived.”

how to run a dedicated server on the cheap

Since I don’t have much spare money, and my site, along with the other sites I host, doesn’t generate much income, here is a guide on how to get your own dedicated server, reliable mail delivery, and reliable DNS service on the cheap.

Getting the server:
There are a million different server hosting companies out there, and a bunch of different setups.

  1. Virtual Servers: These are the cheapest, but have many disadvantages. You really don’t have your own server. You basically get an account on one big server that is sectioned off into a bunch of smaller servers. Each user has full control over their own section, but there are still plenty of ways one user could inadvertently affect the performance/reliability of your little server.
  2. Co-Located Servers: This gives you the most amount of control out of all the options. In this setup, you build/buy/steal your own server, and mail or deliver it to the hosting provider. They then attach it to their network, charge you a monthly fee, and then leave you alone.
  3. Dedicated Servers: This is what I have, and in my opinion is the best bet for your average poor person, like me. This is basically like leasing a car. It’s yours for as long as you want to pay for it, but as soon as you decide you don’t want it anymore, you’re left with no server to claim as your own. But other than that, a Co-Located and a Dedicated server are both the same.
  4. Guerrilla Hosting: This is where you take whatever computer you can find, hide it somewhere in your work’s office or data center, and leech off of their network. This is the cheapest option, but it has its obvious implications.

Where to install the server:

So now that you have your method of hosting your server picked out, you have to find a place to put it. If you go to Google and type in: dedicated server hosting (or whatever hosting choice you decided on), you will see there are hundreds of companies to choose from. All of the companies have their own advantages and disadvantages, but in my opinion at least, the biggest factors are

  • price
  • network performance
  • specs on the server they give you

So really, who you choose is all up to you, but in my latest search, when I moved to my current provider, I looked through 30 or so different companies and settled on one place that is based out of Germany: 1paket.com. They are a really simple company. They do one thing, which is rent out dedicated servers, they have been extremely responsive about any problems I have had, and they had my new server up and running in less than a day.

Also, they only charge $75/month, which is pretty cheap.

Now on to the good stuff: saving your ass.

So now you have your server hosted somewhere, and you have started setting it up. This brings us to the next 2 important things.

DNS: Unless you just want your server to only be accessible by its IP address, you’re going to need a reliable DNS setup. The first thing you need to decide on is who to make your primary name server.

  1. Do it yourself – You can just install BIND on your new server and call it a day, but this has 2 big disadvantages. It is another service running on the same machine, which in the end is just another point of failure. If the machine goes down, any backup mail setup or anything like that goes out the window; you’re gone until you get everything back up. The second downside is that you’re introducing another hole for someone to sneak through and break into your system. Since there are other options available, it doesn’t seem like a decent trade-off.
  2. Pay for DNS – Who wants to do that? On the other hand, you’re paying them to make sure their DNS setup always works, which might work out well.
  3. Use a web-based DNS provider – Most of the dirt-cheap domain name registrars these days offer DNS for free. Sometimes you don’t even need to buy a domain from them, but if you still need to buy one, it might not be a bad idea to get the free hosting. I know mydomain.com does this, but I have had some problems with the reliability of their network and the procedures they use to transfer domains. The site I currently use is XName. They offer completely free primary and backup DNS service, you can manage as many domains as you want, and they provide you with a copy of the BIND config that you use, which makes restoring lost changes or even moving your DNS server elsewhere extremely easy, and as I said, they are free. However, they will happily accept PayPal donations, and I strongly recommend taking that option. Their setup is better than most of the web-based services I have seen, including ones that you pay for. Then get secondary DNS hosting somewhere else. This way, even if XName goes down, your domains will still resolve because your secondary DNS provider is still up and running somewhere else. RollerNet is another great company. They offer mail and backup DNS services completely free, and if you send them $30, they give you a bunch of extra features as well.

Mail: Just like everything else, there are plenty of web-based providers, but most of them cost money, and most of them don’t offer the large range of features that I need. So in my opinion the best setup is to host all your mail services locally, using Postfix or something similar, and then set up a backup MX record that points to a provider that will hold the mail until your server comes back up in the event of a crash or network outage. For this you should go back to the company I mentioned earlier… RollerNet. They offer a ton of mail serving features, including store and forward, which lets you make them your primary and backup mail host, and they deliver the mail to whatever mail server you tell them to, making your real mail server hidden from the public. Spam control, DNSBL, SPF, along with a ton of other features.
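In zone-file terms, the backup MX setup is just a second, lower-priority MX record pointing at the provider; a rough sketch with made-up hostnames:

example.com.    IN    MX    10    mail.example.com.
example.com.    IN    MX    20    backup-mx.provider.example.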

So, you should go check out:

1paket.com – extra cheap and reliable server hosting

RollerNet – reliable free mail and free DNS hosting

XName – reliable free DNS hosting

Is MS Windows ready for the desktop?

Read this:

Is Microsoft Windows ready for the desktop?

This is a really funny article. Looking past all the sarcasm and humor, sadly it’s all true. Especially the part about “Non-Voluntary Contributions,” or “NVC” for short.

whois and traceroute suck. WhoB and LFT are where the party is

Last night I was trying to track down why all these odd HTTP requests were going to a server I am working on. It looked like the server got listed on some web proxy list or something, because basically every request that came in was in the form of

GET http://randomsitename.com

What was even weirder was that every one of those crazy requests was for either a random little search engine or one of a bunch of popular 3rd party ad servers.

Either way, the end result was that I had about 280 IP addresses that all these requests were coming from, and I was trying to find some kind of link to explain why all these IPs were sending requests to this one random server that hasn’t even been put into production yet.

Looking at whois output gets really boring after a while; plus, most whois clients don’t handle bulk processing very well, and I wasn’t really interested in sitting around either manually running whois queries on 280 IPs or staring at the output of all those whois records going by.

Then I found this little tool called WhoB. WhoB is a really handy little command line whois client that is designed to produce all its output on 1 pipe-delimited line, which makes it really easy to use with grep or awk. Also, WhoB uses a variety of sources to get its data. It primarily looks up information derived from the global internet routing table, as opposed to the standard whois client, which sucks unless you specify which whois database to use (and you need to know its address), which makes things really inconvenient if the addresses you are researching are scattered internationally.

You can look at the WhoB manual for all of its options; to run it across my whole list, I just typed this line:

for ii in `cat fulllist`; do whob -o $ii; sleep 10; done | tee ./whoisoutput

I was able to save all the output to a file, watch the results scroll by in the meantime, and have some nice, easily greppable output, which, after it finished, told me that all the requests were from 2 very large networks in China. Also, in case you were wondering, I added the “sleep 10” part because the ARIN database apparently cut me off when I was querying it at least once a second, and apparently they don’t like that.

Here is a sample of the output:

222.79.29.118 | origin-as 4134 (222.76.0.0/14) | CHINANET fujian province network

The -o option tells WhoB to display the organization name that the registry has on file for whoever owns that IP.
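Since the output is pipe-delimited, summarizing it is a one-liner. For example, to count how many of the 280 addresses fell into each network (field 3 is the organization name in the sample above):

awk -F'|' '{print $3}' ./whoisoutput | sort | uniq -c | sort -rn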

Also, WhoB comes in the same package as another really useful tool that I also found last night, called LFT. LFT is…

short for Layer Four Traceroute, is a sort of ‘traceroute’ that often works much faster (than the commonly-used Van Jacobson method) and goes through many configurations of packet-filter based firewalls. More importantly, LFT implements numerous other features including AS number lookups through several reliable sources, loose source routing, netblock name lookups, et al. What makes LFT unique? Rather than launching UDP probes in an attempt to elicit ICMP “TTL exceeded” from hosts in the path, LFT accomplishes substantively the same effect using TCP SYN or FIN probes. Then, LFT listens for “TTL exceeded” messages, TCP RST (reset), and various other interesting heuristics from firewalls or other gateways in the path. LFT also distinguishes between TCP-based protocols (source and destination), which make its statistics slightly more realistic, and gives a savvy user the ability to trace protocol routes, not just layer-3 (IP) hops.

LFT is a lot more useful than the normal traceroute command; I won’t say it actually ran any faster, though.

Also, LFT/WhoB is available as a package in Debian. If you’re using Ubuntu, you need to tell the package manager to use the “universe” package database; otherwise you will have to go to the LFT/WhoB website and download the Debian package from there.