2

I recently became aware of a proxy configuration issue that resulted in slow performance for users browsing websites. Most of our IT folks have a slightly different config due to the way we access dev & test environments, so we ended up getting a bunch of vague "the internet is slow!" complaints before fixing it. A few months ago we had a problem where a bug in an application killed performance on many PCs... but we had a very difficult time detecting it.

This issue is bugging me, because it was something that we totally could have addressed proactively. The problem is that we have no instrumentation to tell us whether it usually takes 5 seconds or 5 minutes to run through the tasks that our users do every day.

Does anyone out there know of a free/cheap tool that would allow us to script something like this:

  1. Load Internet Explorer, time the application start
  2. Go to google.com, time the page load
  3. Go to example.com, time the page load
  4. Close browser

I'd like to have a script do something like this every 15 minutes to develop a baseline and figure out what "slow" means for users. The internet is just one example; I'd see this being useful for in-house and other applications as well.
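Just to illustrate the kind of data I'm after, here's a rough Python sketch of the baseline-logging idea. The URLs and the CSV path are placeholders, and a real version would drive the actual browser rather than doing a bare HTTP fetch:

    # Rough sketch only: measures bare HTTP fetch time, not browser start-up
    # or rendering. URLs and the CSV path are placeholders.
    import csv
    import time
    import urllib.request
    from datetime import datetime

    URLS = ["http://www.google.com/", "http://www.example.com/"]
    LOG_PATH = "page_load_baseline.csv"   # placeholder output file

    def time_page_load(url):
        """Return wall-clock seconds taken to download the page body."""
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=60) as response:
            response.read()
        return time.monotonic() - start

    with open(LOG_PATH, "a", newline="") as log:
        writer = csv.writer(log)
        for url in URLS:
            writer.writerow([datetime.now().isoformat(), url,
                             round(time_page_load(url), 3)])

Scheduled every 15 minutes, something like that would at least give us timestamped numbers to graph, even though it misses browser start-up time.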

duffbeer703

4 Answers

1

In my opinion, applications themselves should support monitoring these kinds of metrics with any standard monitoring suite, including setting sensible default warning thresholds. However, most applications don't do that, with a few notable exceptions like Exchange with System Center Operations Manager.

In this case I'd look at it more as a user and usability study problem. Regularly doing over-the-shoulder testing of user workflows would be a useful start, even though it's not automated.

Applications killing performance on clients could be caught with proper performance monitoring, though it needs to cover all the metrics that can slow a PC to a crawl - CPU and memory load, disk and network I/O load and patterns, and so forth - just like server monitoring.
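As a rough sketch of the kind of client-side sampling I mean (this assumes the third-party Python psutil library is installed on the workstation, and the 90% thresholds are made-up examples):

    # Minimal client resource snapshot; psutil is a third-party library and
    # the 90% thresholds are arbitrary example values.
    import psutil

    def snapshot():
        return {
            "cpu_percent": psutil.cpu_percent(interval=1),  # sampled over 1 second
            "memory_percent": psutil.virtual_memory().percent,
            "disk_read_bytes": psutil.disk_io_counters().read_bytes,
            "disk_write_bytes": psutil.disk_io_counters().write_bytes,
            "net_sent_bytes": psutil.net_io_counters().bytes_sent,
            "net_recv_bytes": psutil.net_io_counters().bytes_recv,
        }

    metrics = snapshot()
    if metrics["cpu_percent"] > 90 or metrics["memory_percent"] > 90:
        print("WARNING: workstation under heavy load:", metrics)
    else:
        print("OK:", metrics)

Shipping numbers like these to whatever suite you already run gives you the trend data to spot a misbehaving application.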

I understand the dev and test environment access issue, but I'm a strong proponent of having at least the first-line support staff on the same standard images, network, and so forth as the rest of the users - even if this is impossible to implement for everyone in the department.

Using remote management servers or multi-user workstations for day-to-day admin work is an easy way to avoid relying on the local PC being set up in a specific way or with specific tools.

Oskar Duveborn
  • I agree, those are some good ideas. Using identical configs is also challenging for us because we support customers with very different configurations. Our server-side monitoring definitely needs improvement; I was just hoping to drive some improvements from the client side! – duffbeer703 Jan 07 '10 at 16:18
  • +1, most definitely agree. It's the main reason why I'm so adamant about standardization. Having the same systems/processes/apps/configurations as your staff allows you to get a better feel for how things are going in the user environment. At work we have two environments, one for techs and one for users. We use the 'user environment' (Office, email, web browsing, other standard apps) until we need to do any kind of administration. Of course that means two accounts per tech, but we admins should be able to handle that, right? – l0c0b0x Jan 07 '10 at 17:29
  • @duffbeer703 (great name, by the way), I doubt your configurations differ so much that you couldn't have a mixed environment where 'most' of the apps are standardized - that is, standard installations and configurations across the board. – l0c0b0x Jan 07 '10 at 17:36
1

I really like the idea of monitoring for slowness before users notice anything.

I would try to tie it in to whatever monitoring software you're already using (Nagios, etc) for convenience.

The Cucumber framework looks interesting (http://cukes.info/), and there's a Nagios plugin for it (search for "cucumber-nagios").

You could also script Internet Explorer with PowerShell or another scripting language. I always found that more cumbersome, though.
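If you end up rolling your own check instead, a bare-bones Nagios-style plugin is just a script that prints a status line and exits with the standard codes (0 OK, 1 WARNING, 2 CRITICAL). Here's a minimal Python sketch, where the URL and thresholds are only example values:

    #!/usr/bin/env python3
    # Bare-bones Nagios-style check: time one page fetch and map it to the
    # standard plugin exit codes. URL and thresholds are example values.
    import sys
    import time
    import urllib.request

    URL = "http://www.example.com/"
    WARN_SECONDS = 5.0
    CRIT_SECONDS = 15.0

    try:
        start = time.monotonic()
        with urllib.request.urlopen(URL, timeout=30) as response:
            response.read()
        elapsed = time.monotonic() - start
    except Exception as exc:
        print(f"CRITICAL - {URL} failed: {exc}")
        sys.exit(2)

    if elapsed >= CRIT_SECONDS:
        status, code = "CRITICAL", 2
    elif elapsed >= WARN_SECONDS:
        status, code = "WARNING", 1
    else:
        status, code = "OK", 0

    # Performance data after the pipe lets Nagios graph the trend over time.
    print(f"{status} - {URL} loaded in {elapsed:.2f}s|load_time={elapsed:.3f}s")
    sys.exit(code)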

Jason
0

If you need to know the performance of your website and would like to diagnose problems later, you need network monitoring software.

For free, Wireshark is good. For commercial use, Capsa is suitable.

0

Have you looked at HP SiteScope? Not only will it pull your system information, including potentially SNMP information from your proxy server, but you can also run application performance monitoring scripts for the web. This application sampling technology - essentially a GUI-less browser completing the scripted tasks - is a component shared with HP LoadRunner and HP Business Availability Center. Alerting and reporting are built into SiteScope.

Something you could also do on a scripted basis is use curl with timers around the beginning and end of the request. The pseudocode might look something like this:

  1. GET START TIME TO MILLISECONDS
  2. ISSUE REQUEST FOR PAGE DOWNLOAD USING CURL
  3. GET END TIME TO MILLISECONDS
  4. SUBTRACT START TIME FROM END TIME FOR REQUEST TIME
  5. PUSH REQUEST TIME TO COLLECTION SERVER
  6. IF REQUEST TIME EXCEEDS SLA VALUE THEN SEND NOTIFICATION EMAIL TO Group

You could easily use a cron task (or a scheduled task on Windows) to run the above every fifteen minutes or so, in your favorite scripting language.
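Here's one rough rendering of that pseudocode in Python, using the standard library in place of curl; the collection endpoint, mail relay, and SLA threshold are all placeholders:

    # Sketch of the pseudocode above. The collector URL, mail relay, and SLA
    # threshold are placeholders, not real endpoints.
    import smtplib
    import time
    import urllib.parse
    import urllib.request
    from email.message import EmailMessage

    PAGE_URL = "http://www.example.com/"
    COLLECTOR_URL = "http://collector.internal/submit"   # hypothetical collection server
    SLA_MS = 5000

    # Steps 1-4: time the page download in milliseconds
    start = time.monotonic()
    with urllib.request.urlopen(PAGE_URL, timeout=60) as response:
        response.read()
    elapsed_ms = (time.monotonic() - start) * 1000.0

    # Step 5: push the request time to the collection server
    payload = urllib.parse.urlencode({"url": PAGE_URL, "ms": f"{elapsed_ms:.0f}"}).encode()
    urllib.request.urlopen(COLLECTOR_URL, data=payload, timeout=10)

    # Step 6: notify the group if the SLA value is exceeded
    if elapsed_ms > SLA_MS:
        msg = EmailMessage()
        msg["Subject"] = f"Page load SLA exceeded: {PAGE_URL} ({elapsed_ms:.0f} ms)"
        msg["From"] = "monitor@example.com"
        msg["To"] = "ops-group@example.com"
        msg.set_content(f"{PAGE_URL} took {elapsed_ms:.0f} ms, over the {SLA_MS} ms SLA.")
        with smtplib.SMTP("mail.example.com") as smtp:   # placeholder mail relay
            smtp.send_message(msg)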

James Pulley