Getting back into it (part 3)

Posted 27 October 2012 in automation, feedparser, and software

This post is about software development, but I'm disappointed to say it's not about feedparser or listparser development.

I'm back to working 14+ hour days, and much of my time has been spent writing automation scripts in a custom scripting language that can be interpreted by Tera Term. It has a feature set that makes what I'm doing fairly easy, but it lacks niceties that I'm accustomed to. For instance, everything is global. No, everything. There is no scope. Loops have break but not continue. Subroutines exist but not functions (no arguments, no return values... probably because there's no scope). Nevertheless, I've been able to accomplish a great deal with it.
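
To make that concrete, here's a rough Python analogue of the pattern the macro language forces on you (this is illustrative only, not actual Tera Term macro syntax):

    # A rough Python analogue of the subroutine model: no parameters
    # and no return values, so every "argument" and "result" has to
    # travel through globals. Not actual Tera Term macro syntax.
    arg1 = 0
    arg2 = 0
    result = 0

    def add_numbers():
        # Read the "arguments" and write the "result" globally.
        global result
        result = arg1 + arg2

    arg1 = 2
    arg2 = 3
    add_numbers()
    print(result)  # 5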

Now I'm tackling a new problem: browser automation. I frequently work with HTTP-based interfaces, and it's pretty tedious. At first I thought, "I need simplicity. I'll just use iMacros." Then I tried it and slapped myself, because while it seemed very easy to use, it couldn't be driven as part of a larger script. So I installed Selenium. By the time I left work today I had some promising results, and I expect to have a great example by the end of tomorrow.
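
To give a sense of the shape of it, here's a minimal Selenium sketch in Python (the URL and field names are placeholders, not a real vendor interface):

    # A minimal sketch of scripting a web login with Selenium's
    # Python bindings. The URL and field names are placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("http://vendor.example/login")

        # Fill in and submit the login form.
        driver.find_element(By.NAME, "username").send_keys("admin")
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.NAME, "password").submit()

        # Prove we landed somewhere sensible.
        print(driver.title)
    finally:
        driver.quit()

The nice part is that this is just Python, so it can be called from a larger script instead of living inside a browser plugin.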

The biggest problem for me will be navigating the stupid web interfaces of third-party vendors. Those guys lurve their frames, their Internet Explorer-only JavaScript, and their Flash-based login screens that look like they're sitting on the edge of still waters that reflect what you're typing. Thank God for search engines, because I would never have overcome the simple issues I ran into today.
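
Frames are a good example: Selenium only sees elements in the current frame, so reaching a login field buried in nested frames takes something like this (the frame and element names here are made up):

    # A sketch of descending into nested frames to reach a login
    # field. The frame and element names are made up.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("http://vendor.example/")

    # Descend into the nested frames one at a time.
    driver.switch_to.frame("outer")
    driver.switch_to.frame("loginFrame")
    driver.find_element(By.ID, "username").send_keys("admin")

    # Jump back to the top-level document when finished.
    driver.switch_to.default_content()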

It's made me hungry to get another feedparser release out the door!
