Sunday, November 23, 2008

Credit Card Finance Charge Tidbit

Okay, as those following this blog (if anyone is) may know, I'm not just doing internal server operations at Tacit Knowledge anymore - I'm also the CFO, so naturally I do a lot of finance.

In order to want to take on a role like that, and to pursue excellence in it, you've got to have more than a little bit of interest in even the most mundane parts of finance. You have to want to truly understand the full workings of the little things or you've got no chance when the big things confront you.

With that as motivation, I'll mention that for my entire adult life I have waged a campaign against credit card finance charges. This may seem pointless, and I'll admit it is: in pure dollar terms, reaching a state of "no finance charges ever" isn't worth much, because once you get close there isn't enough money left in the remaining charges to justify chasing zero. But where's the fun in that? Let's get to zero.

The first thing I learned in this campaign was that the credit card companies use several strategies to make it nearly impossible. The first is the non-fixed statement date: they don't align their statement dates with any fixed calendar day, so you can never assume that making a full payment by the 1st of every month will avoid a finance charge.

They move the date around every month, so every once in a while a full statement period will fall between - and not include - two consecutive 1sts, and they'll get you. How sneaky! That honestly took me a couple of finance charges to figure out, but it's easily defeated with a "semi-monthly payment plan" - just send them two payments a month, on the 1st and 15th. Please note that if you don't have some automated way to make this extremely easy, you're wasting your time and it is not worth it. I use Quicken with bill pay, personally.

Okay, so having done that, I thought I'd essentially won the game and needed to move on to other venues to exorcise the daemons of my financial obsessive compulsiveness.

Most of their other strategies revolve around keeping you in debt once you've gotten there, and since that's a problem solved with fiscal discipline (a much larger subject than tactical credit card payment plans) I'm not going to go into it here. You need Suze Orman or a Rich Dad, Poor Dad book, Quicken, and some soul-searching if you're in that situation (highly recommended if you are, though).

Then I just got hit with finance charges in two consecutive months, and the game was back on. I simply couldn't understand what had happened, since I'd sent them more than the previous balance.

I'm sure this is no surprise to some of you but for those that did not know it, I'll mention another subtle trick they play on you. If you *ever* take advantage of the ability to defer payment, as I did two months ago, you will obviously incur a finance charge. I'm not against that, I will pay to rent cash sometimes, and that month I did so.

But what you've done is establish a new class of balance in your account - a "revolving balance". It's important to realize that this class of balance is treated very differently from "new charges", because now when you send payments in - even two a month! even more than all new charges! - the payments are applied to "new charges" *first*, and only the excess goes toward the revolving balance.

Sneaky again! In this way, you may find yourself making multiple payments a month, for *more* than the sum of your new charges every month, and still owe a finance charge every month as they charge you to float whatever is left of the revolving bit from the past. Seems unfair to me, but hey, they're in the cartel of payment processing networks and I'm not, so I don't get a vote. I do have a new compensating strategy though, and will restate my "never get a finance charge" rules like so:

- make two payments a month, on the 1st and 15th, for whatever the balance is on those days
- if you ever establish a "revolving balance", overpay with your next payment so the account carries a credit balance - enough to cover any new charges that would otherwise land between your semi-monthly payments and leave something revolving (see the worked example below)
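
To see why that second rule matters, here's a worked example with made-up numbers:

    revolving balance carried over:            $1,000
    new charges this cycle:                    $2,000
    payment sent:                              $2,500
    applied to new charges first:              $2,000
    left over for the revolving balance:         $500
    still revolving, still accruing interest:    $500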

Now I think I've "got" them. We'll see.

Some will argue I should quit fussing and use the "AutoPay" option, but then if you actually wanted to rent cash for a month, you'd have to pick up the phone to arrange it or they'd pull the full balance from your checking account anyway. That option scores low on the "ability to control things" scale. My phone calls to Citibank to get the finance charges waived while I learn these lessons :-) average around 15 minutes, so as a solution design, AutoPay would lose money on the opportunity cost of my hourly billable time.

This concludes your obsessive-compulsive finance lesson for the day.

Cheers!

Wednesday, October 22, 2008

New Macbook Pros underwhelming

The only new features I was able to discern:

- larger hard drive
- faster RAM (but limited to 4GB according to them, and there are no 1x4GB 1067MHz parts that I'm aware of)
- stiffer chassis

All the other stuff looked not that interesting.

I'm especially bummed that they upped the speed of the RAM, since that means our new 6GB strategy won't work (there are no 4GB parts at that speed), and they didn't introduce a 17" model that I can see.

But they're heavier at least!

3rd Gen and 4th Gen 17" MBPs are still the best bet for us I think, with 6GB in them.

Anyone else have thoughts?

This is possibly material, as we have enough people now that a hardware refresh is always going on, so we're always potentially in the market for a laptop or two.

Thursday, September 25, 2008

Bose QuietComfort2 headphones are quite nice

This post is a bit of shilling.

First let me say that I know the QuietComfort2 headphones are nice because I've had a pair for around three years. I frequently work with them on, I travel with them all the time - basically I use the heck out of them, and they're much loved.

But they broke on a recent trip, and that reminded me of another feature I like about them: support. Bose replaced them for me (3-year-old headphones!!) with a new pair for $50 ($50!!). If you haven't bought them because they cost a lot (and they *do* cost a lot), you should know that you're getting a set of headphones that will last and last - and if they don't, you'll be able to get a new pair without taking out a loan.

Apple Bluetooth Not Available fix

A lot of people have experienced the dreaded "Bluetooth Not Available" problem on their Apple machines, and I just recently had it happen to me.

I've seen it before and just rebooted to resolve it, but this time I researched further.

Some people report that resetting the SMC will fix it. Some report it's a temperature thing. Odds are it can be many things, so this fix may not work for you.

One forum poster reported that they were using VMware and, without noticing it, one of their VMs had taken control of the Bluetooth device - simply disconnecting it from the VM made it available in OS X again.

Imagine my chagrin when I noticed that's exactly what had happened to me... so if you get this message and use VMware, check your VMs to see if one of them grabbed the device. Disconnecting it will get you going again.

MacBook Pro rev3+ works with 6GB ram (1x4GB SODIMM, 1x2GB SODIMM)

1x4GB SODIMM RAM sticks just came out, and they're actually not that expensive ($179 @ NewEgg).

I read on a ZDNet review site that a Rev3 or higher MacBook Pro would boot with 2x4GB sticks, but not work well.

An enterprising forum member on some Australian tech site defied someone else's unsubstantiated claim and showed that a 1x4GB + 1x2GB SODIMM configuration would work, so I ponied up and tried it for myself, along with a colleague.

Works just fine for me, second day running. Going to kit out the rest of the office with them.

VMs running with 2 CPUs and 3GB of RAM while your host machine is still humming along? Definitely worth $170...

Saturday, August 30, 2008

Build systems, and work flow thoughts

Long time, no post. No worries though, there are new tasks and new thoughts associated with them. Here's a couple.

First is a general problem associated with build systems. For some reason, it appears that dependency resolution and build/package/deploy scripting always get mixed up in build systems. I'm creating one now for a large Java system and the fact is that all the available tools and styles leave me unsatisfied.

A basic system might keep dependencies persisted in the source tree (libraries etc. committed) with ant scripting. This is functional, but for systems with multiple modules, like the one I'm working with, it results in either a lack of fine-grained library control, library clashes, or library redundancies - all of which are maintenance inefficiencies. At the ant level, you either end up with one large, complicated ant file that is hard to maintain, or with many sub-build files that carry their own redundancy.
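
Concretely, the basic layout I mean looks something like this (module names invented for illustration):

    project/
      build.xml          <- master build, delegating to the modules
      module-a/
        build.xml
        lib/             <- jars committed to source control
        src/
      module-b/
        build.xml
        lib/             <- quite possibly duplicating jars from module-a
        src/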

A more "modern" system would use maven for dependency resolution and build. The problem, as I see it, is that while you eliminate the operational inefficiency of library placement, you add a large maintenance inefficiency: you were already running a source control system, and now you also have a fair bit of work maintaining a maven repository to back the dependencies you declare. Further, scripting the actual compilation and packaging of your artifacts can be done in maven, but not elegantly, to my eyes.

I have heard that ivy+ant can help, but since ivy pulls from maven repositories to meet declared dependencies, the maintenance cost of a repository remains, and scripting the build in ant still costs you at least what the basic system does.

Which all leaves me honestly feeling that, from a pure dependency resolution and artifact production perspective, the basic system - libraries in the source tree, ant scripting for artifact production - is still the global optimum for build systems. That just can't be the case, can it? I'd love to hear otherwise.

The only saving grace I'm aware of with maven is the number of value-adding plugins (e.g. IDE configuration generators, static analysis tools); maven proponents assert that their value is large enough globally to overcome the local added cost of maintaining the repository. While I'll grant there are many value-adding plugins, I'm not convinced the same value couldn't be captured with what I would wager is a simpler system substituting equivalent ant tasks for those maven plugins.

The internal debate goes on, though I am committed to using maven for the system. At the least I will come out of this with a thorough understanding of exactly how maven will work on a large project because we are certainly destined to find out.

One other thing I have an ongoing interest in is the general problem of how to efficiently complete tasks when more than one person could do parts of the work and, in general, more than one type of specialized skill is needed to finish.

I've been watching articles on lean engineering, agile, scrum, and XP flow by for a while, and I think something between a pull-based lean system and agile/scrum best models how a highly skilled team actually works - and introspecting on what really drives such a system might formalize it (and make it more efficient). That makes a little formal thought around the ideas useful, and I just recently read a great article that does exactly that:

http://leansoftwareengineering.com/ksse/feature-brigade/

The idea I see there is a generalization of a "bucket brigade" work team style to any production need, and then a refocus of that style to a specific application of software feature development, mapping in the lean and agile processes where necessary to communicate the idea. It appears it might work.

One thing I'm curious about, though, is that the work style seems to count on the links in a chain being fixed - i.e. one particular skill (or skill overlap) is always needed for a given stream of tasks. In reality, a stream of tasks is typically much less uniform, and individual tasks need a variety of skills (or skill overlaps) - never quite the same set twice in a row. I'm not sure how you would handle that, or even whether it is handleable in a general work flow design like this one.

Perhaps for each unique set of skills you'd have a different feature brigade, and just hope that in practice only a finite number of linkages was required, and that exercising them didn't over-utilize a resource shared between multiple brigades? Unfortunately, that seems to match reality frequently as well. It shouldn't be too hard, though, to reduce the multiple chains to a single, slightly branching chain and still get the inventory/capacity alarms a kanban board gives you, while keeping flexibility around the skills a given task requires.

Either way, if there is potential for smooth, high-velocity feature development it should be examined, and this article definitely has some ideas I'd like to incorporate.

Cheers-
-Mike

Wednesday, May 21, 2008

If you upgrade JDKs, don't forget the security portions

Ran into this today at work - the goal was to upgrade the JDK / JVM on our servers to a newer version with bugfixes for some issues the servers have.

We merrily got the new JDK and slowly promoted it out from dev to qa to staging and then to two servers in prod.

Naturally, there was a problem that wasn't discovered until we got to prod, though on re-testing it turned out to be present in every environment. What was the problem? A security exception was now preventing the server from accessing an https URL over SSL.

Huh? Well, Java is very very strict about SSL / HTTPS connections, so if you try to use a Java program to connect to a server over SSL using an HTTPS URL, and that server doesn't have an SSL certificate that is signed by a CA in your JDK cacerts file, you'll have problems. The normal solution for this is to use keytool to add the CA to the JDK's cacerts file, or to make a keystore using keytool with the certificate in it.
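The cacerts route looks something like this - a sketch, with the alias and certificate file name invented; "changeit" is the stock cacerts password:

    keytool -import -trustcacerts -alias our-internal-ca \
        -file our-internal-ca.crt \
        -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit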

Additionally, it is possible that the java.policy file itself has been changed to grant (or revoke) certain permissions from the JDK.

Both of these files live inside the JDK, so if you upgrade the JDK but forget to copy those files over (like we did) you're going to have some problems.

Finally, for very, very old JDKs (1.3.1 and older, pity me), note that you'll also need to download the JCE and JSSE - without those, the old JDK can't do strong crypto and you'll have trouble too.

In order to discover whether you need to care about this, look at JAVA_HOME/jre/lib/ext (the JSSE and JCE jars go in there) and JAVA_HOME/jre/lib/security/ to see if any files have been updated. Or use something like "find $JAVA_HOME/jre/lib -mtime -30" (files modified in the last 30 days, say) to see what's not stock.

Good luck...

Thursday, May 15, 2008

JDK GC tuning for app servers

A quickie post here.

Let's say you get called in to review a server that's throwing OutOfMemoryErrors.

This is a bad thing, obviously! How do you fix it?

Well, it depends on the root cause - first of all you have to figure out what the problem is.

On old JVMs you turn on -verbose:gc and that's it; on newer ones you can turn on more detail, timestamps, etc. - all things that help you, but that's not the point of the post.
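
For reference, on the Sun JVMs of this era the extra detail comes from flags along these lines (a sketch - the exact flag set varies by vendor and version, and "yourserver.jar" is a stand-in for your real launch command):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -Xloggc:gc.log -jar yourserver.jar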

The point of this post is to send you over to get GCViewer from tagtraum at http://www.tagtraum.com/gcviewer-download.html

It's really the best option for visualizing what's going on.

It's not perfect though, and there's a trick I have to use nearly every time I visualize an app server's GC behavior with GCViewer: filter the log file so it has nothing but GC lines:

cat $log | grep '\[' | grep GC > clean-gc.log

Then you fire up GCViewer and see what you've got.

Usually you get a line going "up and to the right", which is great for finance but bad for GC signatures. It means you have a memory leak, or you have misconfigured your server to hold on to too many objects in caches, or to accept too many sessions that themselves hold too many objects.

Sometimes you realize that you never run out of heap but still get OOM errors - then you are either running out of PermGen space or, in rare cases, running out of operating system resources by spawning too many threads (that failure gets mapped to an OOM, believe it or not).
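
If it's one of those two, the usual knobs on Sun JVMs look like this (sizes invented - measure before you tune):

    -XX:MaxPermSize=256m   # raise the PermGen ceiling
    -Xss256k               # shrink per-thread stacks so more threads fit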

Without GCViewer, it's difficult to clearly articulate the problem though, and also hard to see trends. Combined with how fast it is to use, I end up reaching for this particular utility every time.

Cheers

Monday, April 28, 2008

Installing windows on a laptop that had linux

At my office, most of the people use macs, and of the people that don't, most use windows but some use linux.

I don't really care what people use, but when someone wants a windows laptop and the only available hardware in our inventory used to have linux on it, I need to pop in the old windows installation cd and get windows installed, right?

Problem is, the windows installer CD will not install by default onto a hard disk that has linux on it.

Why not? I have no idea. I'm fine with it obliterating all the data - I want it to actually. But it will say "Setup is investigating your something or other..." then get stuck on a black screen. The installer doesn't really start.

The solution? Boot the machine with some linux install CD (or something similar you have handy) and zero out the first part of the hard disk. It is not enough to just re-partition the drive without zeroing out some data. The first 800MB did it for me - it had written that much before I could even hit ctrl-c, and probably just a few MB is enough. Specifically, I popped a Fedora Core install CD into the drive, typed "linux rescue" at the boot prompt, told it to skip networking and skip finding linux installs, then at the root prompt ran:

    dd if=/dev/zero of=/dev/hda bs=1M

I let it run for a second, hit ctrl-c, then typed "exit" and rebooted.

After that, you can use the windows installation cd with no issues.

Wednesday, April 16, 2008

$PATH issues on windows

If you're busy perl-scripting on a windows machine and you're getting unexpected behavior from the command-line utilities you're shelling out to (say, "find", or something), it may be because instead of the utility you think you're getting (/usr/bin/find from cygwin, for example) you're getting something totally different (DOS find, for example).

This can lead to a lot of confusion when you try to figure out why you aren't getting consistent results between systems you're working on.

Solution? Use full paths. If you want the cygwin find, say "/usr/bin/find" - otherwise you just can't be sure.
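
From a cygwin shell, you can also check which one is winning before you trust it:

    which find    # prints /usr/bin/find if cygwin's version comes first on your PATH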

Wednesday, March 26, 2008

Beware DOS line endings with Perl open command in cygwin

Just a quick note: if you are opening files in cygwin perl on windows, you need to beware the difference between line endings on unix and windows (LF vs CRLF).

You can either twiddle with what perl considers a line-break or dos2unix the file, but if you do neither you'll get unexpected results.
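
Both fixes are one-liners - a sketch, with the file name invented:

    dos2unix data.txt
    # or, if dos2unix isn't handy, have perl strip the carriage returns in place:
    perl -pi -e 's/\r$//' data.txt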

Friday, February 15, 2008

Patterns of deployment

This link just flew across the transom, and since I'm thinking about automated deployment on most projects I'm on, I thought it was too good not to pass on.

It's called "Patterns of deployment", and the best part, I think, is the info about host-specific / environment stuff. That's always a tough nut to crack.

Windows locks files, I unlock them

Still dealing with a recalcitrant windows environment where I'm trying to automate the deployment of a ColdFusion application.

Problem is, even after you stop the ColdFusion servers (and you ensure they're down via tasklist.exe inspection), some files are still locked.

It appears the only way to unlock them is to reboot the machine. (Yes, I know this is abhorrent - believe me, I'd avoid it if I could.)

The easiest way to do this (I think) is to use the "sc" Windows builtin to set the services to start manually instead of automatically, e.g. 'sc config "<service name>" start= demand'.


Then reboot the machine and check remotely over ssh until the machine's uptime is recent - via something like 'net statistics server 2>&1 | grep since' - then parse out the uptime (m/\D+(\d+)\D(\d+)\D(\d+)\s+(\d+):(\d+)\s+(AM|PM)/) using Time::Local, being careful when you don't get what you expect, since that means the machine is still rebooting and the connection failed.

Once it's rebooted you should be good to go; then just use sc to set the start back to "auto" and you're done.
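
Putting it together, the shape of the thing - run from the deploy host, with the host and service names invented for illustration:

    ssh deploy@cfserver 'sc config "ColdFusion MX Application Server" start= demand'
    ssh deploy@cfserver 'shutdown -r -t 0'
    # poll this until it reports a recent time (connection failures mean it's still rebooting):
    ssh deploy@cfserver 'net statistics server 2>&1 | grep since'
    ssh deploy@cfserver 'sc config "ColdFusion MX Application Server" start= auto'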

Easy (ha)

cygwin messes up windows permissions, cacl fixes them

I recently had the misfortune to work in a windows environment via cygwin+openssh

This is a seemingly civilized configuration - to my unix-biased way of thinking - as cygwin gives me a shell and ssh is a very familiar way to access machines for logins and automation.

Unfortunately, Windows has a very complex permissions scheme (ACLs) and cygwin+openssh+scp just won't honor Windows permissions. No matter how I tried, the files would land on the server without properly inheriting the permissions of the directories they landed in.

Luckily, Windows provides a relatively easy way to fix this problem in the utility cacls.

What I ended up doing was using cacls with the "/S" switch to get the string (SDDL) representation of the permissions on the parent directory of the location I copied files to, then using the "/S:" form of the switch, fed with the string from the first invocation, targeted at the actual directory I was working in.
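
Roughly like this - a sketch from a cmd prompt, with the paths invented:

    REM show the SDDL security descriptor string for the parent directory:
    cacls D:\apps /S
    REM apply that same string (pasted from the output above) to the target directory:
    cacls D:\apps\myapp /S:"<sddl string from the first command>"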

You can use the nifty "AccessEnum" utility (google it) to verify your permissions a lot more quickly than a ton of right-click/sharing-tab examinations.

Poof, perfectly consistent permissions.

I'll continue choosing unix for my servers though, thank you...