Just a quick note for anyone who hasn’t heard yet – I’ll be speaking at Apple Rock on Tuesday, October 11. Location is Pulaski Academy’s Murphy Theater. The topic is mobile development compared to the desktop. If you’re in central Arkansas, please stop by and say hi.
October 4, 2011
June 25, 2011
Wanted to let folks know I’m speaking at CocoaConf, Columbus OH, August 12-13. There’s a pretty nice lineup of speakers and sessions. Mine is about debugging and performance tuning. Sexy and exciting topics to be sure.
For folks in/near the DC area, iOSDevCampDC is happening that same weekend. I wish I could be split in two, because they’ve got some good speakers as well.
June 14, 2011
May 6, 2011
I stumbled across this little tutorial I wrote back in the mists of time, probably around 1996 or 1997. And it was based on a tutorial I wrote at Visix, probably in 1993 during one of our Optimization Parties. It describes how to read the output of gprof, a profiling tool available on most unix systems. It’s even still there on Mac OS X. So you kids with your fancy Shark and Instruments, here’s what some of your elders used.
gprof is not a GNU tool, even though it has the leading “g”. That “g” probably stands for “call Graph” profiler. You’ll need to check your system’s documentation (e.g. man gprof) for exact instructions on getting gprof to work, but usually it just involves compiling and linking with -pg, running your program, and then running gprof gmon.out > oopack.
Here’s a 300K sample of output from gprof on the DEC Alpha if you want to take a look at it. This particular report is from a run of AOLServer 2.2.1 which involved fetching index.html 53,623 times. The links that follow go to anchors in that 300K sample. What was I wanting to profile? I wanted a gut check to make sure that life in the server was sane, and to see if there were any obvious bottlenecks that maybe I could address if I had the time. The test was to fetch index.html over and over again – in this case, around 53,000 times.
There are four parts to gprof output:
- Built-in documentation: Short form of everything here, and more.
- Call graph: Each function, who called it, whom it called, and how many times said calling happened.
- Flat profile: How many times each function got called and the total times involved, sorted by time consumed.
- Index: Cross-reference of function names and gprof identification numbers.
I go to the flat profile section when I first start looking at gprof output. The big time consumers are usually pretty obvious. You’ll notice that each function has a [number] after it. You can search on that number throughout the file to see who called that function and what functions that function calls. Emacs incremental search is really nice for bouncing around the file like this.
Here you can see that DString is a big time gobbler:
  %   cumulative    self                 self    total
 time   seconds   seconds      calls  ms/call  ms/call  name
 17.7      3.72      3.72   13786208     0.00     0.00  Ns_DStringNAppend
  6.1      5.00      1.28     107276     0.01     0.03  MakePath
  2.9      5.60      0.60    1555972     0.00     0.00  Ns_DStringFree
  2.7      6.18      0.58    1555965     0.00     0.00  Ns_DStringInit
  2.3      6.67      0.49    1507858     0.00     0.00  ns_realloc
Out of 21.05 seconds of total clock time, Ns_DStringNAppend consumed about 4 seconds, or about 18% of the time in and of itself. It was called 13 million times.
MakePath consumed one and a half seconds itself, and its children consumed three and a half seconds. The two ms/call columns say that, on average, each call spent about 0.01 milliseconds in MakePath’s own code, and about 0.03 milliseconds in MakePath plus its children.
Handy tip – the function numbers in brackets are approximately sorted by time consumption, so a function with a [low number] will generally be more interesting than one with a [high number].
Now that you know that Ns_DStringNAppend is called a bunch of times, and so could be a useful target for optimization, I’d look at its entry in the call graph section.
Before doing that, just for illustration, take a look at AllocateCa, since it has all of the interesting pieces of the call graph in a more compact form:
                0.04    0.18    53622/160866      Ns_CacheNewEntry
                0.04    0.18    53622/160866      Ns_CacheDoStat
                0.04    0.18    53622/160866      Ns_CacheLockURL
   3.0          0.11    0.53   160866             AllocateCa
                0.16    0.17   160866/321890      Ns_DStringVarAppend
                0.06    0.00   160866/1555972     Ns_DStringFree
                0.06    0.00   160866/1555965     Ns_DStringInit
                0.04    0.00   160866/1341534     Ns_LockMutex
                0.03    0.00   160866/1341534     Ns_UnlockMutex
The entries above AllocateCa are the functions that call AllocateCa. The entries below it are the functions that AllocateCa calls. There are two numbers separated by a slash: the first is the number of calls involving this particular caller/callee pair, while the second is the total number of times the called function was invoked from anywhere.
In other words, for 160866/321890 Ns_DStringVarAppend , AllocateCa called Ns_DStringVarAppend 160866 times. Across all of AOLServer, Ns_DStringVarAppend was called 321890 times.
Similarly, 53622/160866 next to Ns_CacheNewEntry means that Ns_CacheNewEntry called AllocateCa 53622 times, and that AllocateCa was called 160866 times in total.
So, just by looking at this snippet, you know that the three Ns_Cache functions each call AllocateCa about once per serving of index.html, and that AllocateCa makes a single call to Ns_DStringVarAppend, Ns_DStringFree, etc… each time. What’s also interesting to note is that someone is calling Ns_DStringFree more than Ns_DStringInit. This may (or may not) be a bug in AOLServer. You can go see Ns_DStringInit and Ns_DStringFree yourself and track down who the culprit is.
The floating “3.0” on the left is the percent of total time that the function consumed. The two columns of numbers are the amount of time (in seconds) that the function consumed itself (AllocateCa took 0.11 seconds total to run its own code) and the amount of time spent in the function’s children (0.53 seconds).
Getting back to real analysis of DStringNAppend, you can see that MakePath made 50% of the Ns_DStringNAppend calls. Since you know that there were 53623 fetches of index.html, that means that for each page, MakePath was called twice, and for each call to MakePath, Ns_DStringNAppend was called 64 times.
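The arithmetic behind those per-page ratios is easy to check yourself. A quick shell sketch, with the call counts pulled straight from the sample report:

```shell
# Call counts from the sample gprof report.
fetches=53623       # index.html fetches
makepath=107276     # calls to MakePath
appends=13786208    # calls to Ns_DStringNAppend (half came via MakePath)

echo "MakePath calls per page fetch: $((makepath / fetches))"      # -> 2
echo "Appends per MakePath call:     $((appends / 2 / makepath))"  # -> 64
```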
If one call to MakePath could be elided (since it’s getting called twice), or if fewer than 64 Ns_DStringNAppends could be done per call, we could see a performance boost.
Just browsing the gprof output can be an illuminating exercise. If you have a gut feeling that a particular function is a hot spot (say, Ns_LockMutex ), you can see the call graph for that function, see if it’s consuming lots of time, or if it’s being called a whole bunch. Here it was called 1,341,534 times, or about 25 times per page serve. Maybe that’s high. Maybe that’s right. Sometimes a suspected culprit isn’t there, or you find a surprising time waster.
Because this sample gprof output was done on a Dec Alpha system, there was some suckage involved, such as no explicit time recorded for system calls. So we don’t know if, for example, select() blocked for a long time on each call.
April 3, 2011
Mike Ash’s recent Friday Q&A about signals mentioned SIGWINCH, the hearing of which always sends me down memory lane. My first professional bug was centered around SIGWINCH. By “professional bug”, I mean a bug that someone paid me actual money to fix during a period of employment.
I went to work for a company called Visix straight out of college in the early 90′s, which at the time sold a product called Looking Glass, a file browser much like the Macintosh Finder but for Unix. Eventually Looking Glass would become the Caldera Linux desktop. Looking Glass supported the major graphical windowing systems of the time: X11, Intergraph’s Environ V, and Sun’s SunView. The image at the top of this posting is the only screen shot I could find of the version of Looking Glass I worked on running on SunView. Notice the awesome desktop widgets at the top. That was typical SunView style, so Looking Glass was pure awesome eye candy in comparison.
I was hired for the tech support team, and our duties were phone support (typically debugging network configurations and X server font paths) and porting Looking Glass to other platforms. Being the Lo Mein on the totem pole, I was given the old platform nobody wanted to touch any more: SunView.
SunOS 4.1.X had just come out, and Looking Glass would hang randomly. It worked fine on 4.0.3. My job was to find and fix this hang. This was my first introduction to a lot of things: C, unix systems, windowing systems, navigating large code bases, conditional compilation, debuggers, vendor documentation that wasn’t from Apple, working in a company, and so on. Luckily the SunView version didn’t sell terribly well any more because everyone was moving to X11, but there were a couple of customers bitten by this problem.
So what is SunView? SunView is a windowing system: different programs run displaying graphical output into a window. Nowadays that’s commonplace, but back when SunView came out it was pretty cool. SunView was one of the earlier windowing systems, so it had a bunch of peculiarities; the biggest was that each window on the screen was represented by an honest-to-god kernel device.
For example, /dev/wnd5 would be a window, as would /dev/wnd12. There were a finite number of these window devices, so once the system ran out of windows you couldn’t open any more.
There was a definite assumption of “one window to one process” in SunView. Your window was your only playground. Looking Glass was different because it could open multiple windows. Because of the finite number of windows available system-wide, we had to create the alert that said “You can’t open any more windows because you’re out of windows” at launch time, thereby consuming a precious window resource, and hide it offscreen. It was the only way we could reliably tell users why they couldn’t open any more windows. Glad I wasn’t the one that had to make this work in the first place. I was just fixing Legacy Code.
The other peculiarity is that you never got window events. Even in the 1.0 version of the Macintosh toolbox you could easily figure out if the user dragged the window, or resized it, or changed its stacking order. In SunView you just got a signal: SIGWINCH, for WINdow CHange – hence the memory-lane trigger. The user moved a window? SIGWINCH. The user resized it? SIGWINCH. The user changed the z-order? SIGWINCH.
With just one window that’s not too bad. Just query your only window for its current size. For us, though, we had to cache every window’s location, size, and stacking order. Upon receipt of a SIGWINCH we would walk all of our windows and compare the new values to the cached version. If something interesting changed we would need to do the work of laying out the window’s contents.
So, back to my bug. It took me a solid month to fix. All this time I thought I was a failure and was worried I’d get fired. That would be embarrassing. It took so long to fix because it was part time work in amongst my other responsibilities, and also because it was difficult to reproduce. Spastic clicking and dragging could make it lock up, but not reliably. Using the debugger was pointless – a 4 meg Sun 3/50 swapped for two hours as dbx tried to load Looking Glass. I ended up using a lot of caveman debugging.
The application event architecture we used is shown right up there. Each window had an event queue (remember that one window to one process assumption) that held all of the mouse and keyboard events. Upon receipt of new events (I forget if we got a signal for that, or if some file descriptor became readable), we would walk our windows: read each event, handle it, then move on to the next window.
I was getting some printouts, though, showing a window receiving mouse-downs and mouse-drags, but no mouse-up. Occasionally I would see a mouse-up with no mouse-downs. Ah-ha! The mouse-up was being delivered to the wrong window’s event queue, probably due to some race condition down in the system that didn’t notice the current window changed during the drag. The fix was easy once I found it: just merge the events from all the windows first, and then process them. Happiness and light.
It was then I learned how expensive malloc is. I malloc’d and free’d event structures, but performance was dog-slow, especially during mouse drags. Caching the structures made life fast again.
Memories like these make me so happy with the cool tech we get to play with these days.
March 31, 2011
A friend recently asked me about my opinions on the Time Capsule. I had the first generation device. It was OK, but slow, and eventually died the death of the power supply.
I have the latest gen now, 2TB, and love it. With 10.6 over a fast network, I don’t notice the hourly backups. One thing I did notice as time went on was that the backups were getting kind of big. I want my individual machine backups to be under 1TB so I can archive them to some terrorbyte external drives I already have. I’d exceed that if I backed up too much junk too often.
My main goal for backups is to restore my
$HOME data in the event of a machine failure. I don’t plan on restoring the OS or Applications from the backup. I’ll just use whatever OS is on the replacement machine or install my own, and I’ll install applications as I need them.
Backup Loupe is a great application for looking at your backups and seeing what’s being piggy. A file that’s only 50 megs is not a big deal, but it becomes a bigger deal if it gets touched regularly and gets backed up every hour. Using Backup Loupe, and general foresight, I have built this exclusion list over the last year or so. Unfortunately the list is not in any sane order – it’s not even chronological.
Some are pretty obvious:
~/.Trash – no need to backup trash.
~/Library/Caches, those will be re-created by applications.
~/Library/Application Support I do back up, since it might have useful goodies. [edit: Mark Aufflick suggests preserving /Library/Application Support/Adobe. Personally I just use Lightroom and Photoshop CS5. Lightroom is pretty well behaved, and I'll just reinstall Photoshop. But if you had the full Suite, that'd probably be a huge pain].
/Applications, I’ll just redownload and reinstall them.
/Users/bork is a test user I only use for development. No need to back that up.
The various entries peculiar to individual apps or companies are there because they’re either big, can be regenerated, or an app touches a file often. Camino is one of them. I don’t use it very often, but every time I do I have to back up 50 megs. So its application support directory is on the chopping block. Similarly, Chrome gets updated every week, and is pretty big.
/Xcode4 is there because I’d fill up the Time Capsule just from Xcode updates. I can always download the latest one if I’m setting up a new machine.
~/junk is a directory I use to throw junk into (hence the name).
NoBackup is a similar directory at the top level. I have one in Movies too, as a place to store one-off iMovie projects. Once I create the final movie the project can go bye-bye, and I usually don’t feel the need to back it up in the interim. I can get the original footage from the camera again. If it’s something larger or more important, I’ll leave it in ~/Movies, which does get backed up.
~/Downloads is another place for stuff I don’t want to delete right now, but won’t cry if it suddenly went away. If I want to keep it, I’ll put it somewhere that’s backed up.
Lightroom generates previews of photographs so that the UI is more responsive. Those can be regenerated later, so they don’t need to be backed up.
All system files, including /usr, are things that would come with a fresh OS install. Things in /usr/local I can re-install as needed.
My music lives on another machine, so I don’t need to back it up here.
I check with Backup Loupe every now and then to make sure there’s not a new surprise that’s getting backed up.
Addendum: courtesy of brad@cynicalpeak, there are other trash directories – /Volumes/*/.Trashes – if you have multiple disks. Also, /var/folders is yet another cache location.
November 4, 2010
For the folks who had the stamina to sit through my talk about Debugging at today’s MacTech conference sessions, here are some links I mentioned:
- What Will We Use, exploring Ubuntu’s bug one.
- Troubleshooters.com, home of the Universal Troubleshooting Process.
- Warnings I turn on, and why by Peter Hosey.
- Debuggers are a Wasteful Timesink
- Command-line processing in Cocoa.
And for the folks who missed it, I believe MacTech will be selling DVDs.
October 29, 2010
July 13, 2010
Ever find yourself wanting a short-term shortcut button for something in an application, especially something buried a couple of levels down in menus? I’ve been using the Help menu search field to essentially pre-cache a menu item for quick access.
Specifically, when I work on the newsletter for my community orchestra, I have all the submitted stories in one Pages™®© document and the final newsletter in another document. I strike out stories as I move them over. I can tell what’s been finished, but I don’t destroy what’s there in case I need to undo or refer to something. There’s no toolbar button that I could find for strikeout, so I just search for ‘strike’ in the menus. Now when I want to strike out some text I just go to the help menu and hit the first useful item.
April 9, 2010
To keep myself from dying too early, I’ve been doing a lot of IndoorCycling (a.k.a. Spinning). Today I forgot my heart rate monitor so I borrowed one from the club. I wanted to record my heart rate (usually my Garmin records it and makes a nice pretty graph). I couldn’t do that, so the next best thing is to sample data once a minute and write it down.
As I was heading to get a clipboard and some paper, I remembered, “I have my iPad with me for random other reasons. I also have Numbers™. I can just type stuff into a spreadsheet.”
And amazingly enough, it worked great.
(and as you can see here, I’m dead. Actually, the loaner strap had a limited range)
September 7, 2009
Yeah, Git and Hg are the new hotness, but for some projects I’m still using subversion.
I also like keeping my project documentation in VoodooPad. By keeping a narrative of development in an ever-growing VP document, I can go back and figure out where certain “design” “decisions” came from that are currently causing me problems. Not that it ever happens to me. *cough*.
Anyway, I’m moving into a new 13″ MacBookPro (sweet sweet little machine), and I’m taking the nuke-from-orbit approach: only installing software and data files on-demand. So now I need to be able to commit VoodooPad doc changes. This is complicated by VP docs being a bundle with files coming and going as the file is edited.
Getting it set up correctly isn’t 100% obvious – here’s how I do it:
- Snarf the “Commit changes to subversion” code and paste it into a new file living at ~/Library/Application Support/VoodooPad/Script PlugIns/Svn Commit.lua
- Subversion now lives in /usr/bin, so edit the /usr/local/bin references accordingly. Here’s my version:
--[[
VPLanguage = lua
VPScriptMenuTitle = Subversion Commit
VPEndConfig
]]

-- we assume subversion is located in /usr/bin/svn
posix.chdir(document:fileName())

-- add new files
os.execute("/usr/bin/svn st | " ..
           "/usr/bin/grep '^\?' | " ..
           "/usr/bin/sed -e 's/\?[ ]*//g' | " ..
           "/usr/bin/xargs /usr/bin/svn add")

-- clean up deleted pages
os.execute("/usr/bin/svn st | " ..
           "/usr/bin/grep '^\!' | " ..
           "/usr/bin/sed -e 's/\![ ]*//g' | " ..
           "/usr/bin/xargs /usr/bin/svn rm")

os.execute("/usr/bin/svn ci -m'auto commit'")
os.execute("/usr/bin/svn st")
vpconsole("Commit complete.")
- Set up your favorite form of passwordless access. I’ve been using ssh’s authorized keys: passwordless access quickie.
- Restart VoodooPad to get the new menu item.
- Make changes, and no longer fear commitment.
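The passwordless-access step via ssh authorized keys goes roughly like this. A sketch only: the key file name and svn host are placeholders, and I’m generating the key into a local directory for illustration where normally you’d use ~/.ssh.

```shell
# Generate a keypair with an empty passphrase.
mkdir -p svnkeys
ssh-keygen -q -t rsa -N "" -f svnkeys/id_rsa_svn

# Install the public key on the server (hostname/user are placeholders):
#   ssh-copy-id -i svnkeys/id_rsa_svn.pub you@svnhost
# After that, this should log in without prompting for a password:
#   ssh -i svnkeys/id_rsa_svn you@svnhost true
```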
January 31, 2009
It’s been a while since I last used a two-monitor setup. Usually I do all of my work on a 15″ MacBookPro or one of the plastic MacBooks. But I wanted a better monitor for the desktop when I’m doing photo stuff, so now I have two monitors again. Last time was in 2002 when I was doing contracting, and the client’s product I was working on wouldn’t fit on a laptop screen. I used a secondary monitor for running the software.
Even back in the Mac II days I always got really annoyed with the “traditional” way of setting up multiple monitors: aligning the desktop areas along a long shared vertical or horizontal border, so you could have one window span both screens and have it look non-horrible. My problem was I would always overshoot one monitor and end up on the other. I had really come to depend on Fitts’s Law. So why not use that for the monitors too?
I use my monitors as distinct playgrounds: Code and whatnot on one and the client’s big-assed program on the other. Lightroom’s Develop pane on one, and the Library grid on the other. Photoshop’s editing area on one, palettes on the other. I never have one big window that straddles both screens. Hence, my arrangement, seen above, connects the monitors at one corner.
This gives me my sides as big Fitts’s Law targets, as shown in the cute kitty picture. I can slam the mouse to the side to get to the tools. The menu bar at the top remains a nice big target. If I want to go to the other monitor, I throw the mouse to the bottom-left corner.
This makes the mouse enter the second screen at the top-right, and I keep my Photoshop palettes and Nik plugins panel up near that corner for easy access. I twiddle what I want, then throw the mouse into the upper-right corner to get to the main screen. If I lose my mouse, I can just keep mousing up and to the right until I see it on the main screen.
Why not put the other screen to the right? I keep my Dock hidden on the right. With today’s wide-screen displays, horizontal real estate is cheap, vertical real estate is still precious (six more lines of code! woo!). Having the Dock Fitts-style on the right makes it very easy to access.
Why not off the bottom? I use the hard border of the screen when resizing windows large – grab the corner, resize larger quickly until hitting the bottom of the screen. Unfortunately the green jellybean rarely does what I want it to do.
I’m not saying this is the best way for everybody, but it works very well for me. If you get frustrated with your multiple-monitor setup by accidentally mousing into the other screen, give the corner-connection a try.
July 1, 2008
May 15, 2008
In case folks might have missed it : Launchd: One Program to Rule Them All Tech Talk @ Google, with Dave Zarzycki, the launchd dude.
March 26, 2008
During a discussion on Cocoa memory management on Cocoa-Dev, Bill Cheeseman posted the patent numbers for the autorelease mechanism. Here are some links, all called “Transparent local and distributed memory management”:
5,687,370, 6,026,415, and 6,304,884.
February 3, 2008
Wonder if a signed integer value and a sanity check for <= 0 would have caught this case? (Mail.app on Leopard, BTW. Anyone know how to get Mail.app on Leopard to delete messages without moving them to the trash? Command-delete moves to the trash, and doing a “Cut” takes a freakishly long amount of time.)
January 16, 2008
It’s Macworld 2008 time. I had Google Booth Bunny duty on Tuesday. When I wasn’t working the booth, I wandered around the show floor with a camera, with the obligatory web gallery.
October 29, 2007
Apple provides downloads of OS X versions for developers with seed keys. That’s cool.
Apple’s webservers cut off long downloads after twelve hours. When it takes twelve hours and fourteen minutes to download something, that’s not cool. I don’t want to think how many multigigabytes I downloaded before figuring out that twelve-hour cutoff.
Jonathan Wight said on IRC one day, “dude, use Speed Download”. I did. It works. I watched the progress meter at 12 hours: download speed went to zero as expected. A couple of seconds later it cranked back up. Fourteen minutes later I had a finished download. That’s cool.
October 26, 2007
Chris Hanson and Scott Stevenson are organizing NSCoder Night in the Silicon Valley, a weekly event where Cocoa geeks can hang out for coding and mayhem at a coffee shop or a pub. It sounds like it’ll be a huge amount of fun.
For folks in the Pittsburgh area, Jeff Hunter is organizing DevHouse Pittsburgh Thursday the 8th. It’s similar to NSCoder Night and SuperHappyDevHouse. Some of the local CocoaHeads will be there.
September 29, 2007
VoodooPad is a personal Wiki that lets you write stuff and link things around. When it sees words in CamelCapsStyle, it automatically turns them into links to new pages where you can write more stuff. All pretty standard wiki stuff. It uses the OS X text engine so it has all of the standard word processing features you’ve come to expect, including stuff like tables and lists. This is especially nice because I get a lot of my emacs key bindings for free. Muscle memory is a wonderful, wonderful thing.
There are a couple of things I love about VoodooPad. First, it gets out of my way: I type, I link, I paste in graphics. Gus has obviously paid a lot of attention to the fine details. It is also very stable. I don’t remember when my last crash was. It just works. Just about every Leopard update makes SnapZ Pro freak out and I have to get new license keys (and then usually something fails on the server side, and it takes 20 minutes of dinking around to get a license). Omnigrackle‘s layers get confused, and the PDF export dialog has a bug that makes it easy to corrupt your document. Mail.app regularly crashes. But VoodooPad just keeps on chugging along. Oh, and it’s fast, too.
Some folks I know put everything into a single VoodooPad document and use it to store their life (or at least their brains). I typically have one VoodooPad document per project, and each usually falls into one of three broad categories:
Design Document: I have one where I keep my design notes for the Extreme Cross Stitch Design software I’m working on for the Spousal Overunit. I dump figures from OmniGrackle in there, and use class names as the currency of links. This makes it very easy to capture my thinking about the specifics of individual classes, as well as highlighting the interactions between classes. Sometimes I can go for a month or two between working on the app, so having all this stuff handy and interlinked makes it easy to reload my mental state. My winning IronCoder entry included a VoodooPad design document with all sorts of notes. (The entry was Race Against Time, if you’re really bored.)
Data Dump and Organizer: I have another one where I keep all my notes, to-dos, transcriptions, and copies of interesting emails for the next edition of AMOSXP. You can see a screenie of this over to the right. I blort in anything and everything I think might be interesting for the next edition. As I start chewing up a chapter, I have ready access to stuff to consider for inclusion (and stuff to nuke). Sometimes one topic (say an extension to a favorite object-oriented language) is too big for one chapter, so something like its automated memory management technique would make better sense living in the Memory chapter. So I can easily make a note in MemoryChap to say “go look over InThisSetOfNotes for these aspects of rubbish aggregation that would be interesting to talk about here”.
Debugging Aid: For projects where I tend to do more debugging than design, I have a VoodooPad document that keeps my debugging notes. Usually for each non-trivial problem there’s a page with a dialog with myself. “So What’s the Problem, Fool? Oh, Google Kipple is crashing when you frobulate the giznat. Does it happen all the time? No, just on the second launch. Maybe it’s the SpicyPonyHead user preference. Hrm, could be.” The dialog format lets me focus my thoughts by making explicit what the next useful piece of information might be, and it makes for easier reading when I need to revisit a bug or if I have to put it down for a while and return to it later. The linky nature of a wiki makes it easy to put in different branches of investigation, and lets me revisit what I originally thought was a dead end but might actually be the path to figuring out the real problem. Because Cocoa class names are CamelCapStyle, class names in a stack trace become links. Paste in a stack trace, then link out to a class to jot down some relevant notes.
I’m a fan. Check it out.
September 27, 2007
Steve Yegge knows hiring. His latest epistle is Ten Tips for a (Slightly) Less Awful Resume.
September 3, 2007
As part of my day-to-day work, I need to test my software with both admin accounts and non-admin accounts. Fast User Switching (FUS) is nice, for a while, but it really started to get on my nerves. For one, I’d miss out on real-time entertainment from the company IRC channels. Then there’s the entering of passwords, and waiting for the Big Spinny Cube Effect. These are annoying, but livable.
The big thing is that in some wireless / VPN situations, the net connection would be available in one user’s session but not the other’s; or the FUS act would disconnect me from the network or the VPN. And since I’m hitting internal resources for testing, this was totally Not Good.
BarryJ (via my pal Mr. Machine Tool) pointed out that when you switch users with FUS, a window server and a pasteboard server are left running. This means that you can VNC from one user to another already logged-in user on the same machine.
It’s not perfect (the switched user might not be able to log to the console), and sometimes things lock up if I leave it running overnight, but it’s good enough for me to do my work with multiple user accounts and not have to actually FUS between them.
For the server, the changes from the default settings are setting a password and only allowing local connections (which requires SSH to be running). On the client, the host is localhost. So: FUS to the test user and crank up the server, FUS back to the primary account, and then start the VNC client.
August 30, 2007
I have a love/hate relationship with the MacBook keyboard. I really like the feel, but when I come to the MacBook (personal machine) keyboard after using the MacBookPro (work machine) keyboard (which I like even better), I have a hard time adjusting. In short, my typing is utter and total crap because I hit the corner of a key, which registers as a keypress by feel but doesn’t actually engage the key switch. This makes me sad.
On a whim, I picked up the Mavis Beacon typing tutor thing at the Apple store (after going there to caress an iPhone in person). It’s an old, cheesy program, and some of the “games” are embarrassingly awful, but after doing a couple of basic typing exercises, I have the MacBook keyboard “feel” back again, and my typing accuracy has improved a whole bunch. Which makes me happy.
August 24, 2007
I promise to keep the non-programming content to a minimum (I’m sure many folks tune right out when camera nerds start talking). But I figured I’d point folks to my photo album: http://picasaweb.google.com/borkwareLLC.
August 22, 2007
Sometimes when I’m building a bug report for somebody, I’ll make a movie of the application misbehaving. A picture is worth a thousand words, so a movie is at least 15,000 words a second.
For instance, I saw some strange behavior in Mars Edit’s preview panel with a <pre> block. I could spend a chunk of time describing the behavior, and even then would probably omit some important piece of evidence. Instead, I sent This Movie on to the Red Sweater Empire. DCJ asked me to try some stuff, we narrowed it down to line break conversion, and I found a work-around by putting in a space character in the <pre> block.
Similarly, we have an internal tool at the GOOG which I had difficulty getting working. I made a quick movie of it not working, sent it on to the developer, and he said “huh. you’re doing everything right. Lemme go look for the problem.” Sure saved a lot of back and forth, and also saved a lot of wondering “Is this a PEBKAC problem?” (I’m the first to admit I do stupid stuff.)
Naturally, include a link to the movie in any bug report. Sending someone ten megabytes in an email without prior warning could be interpreted as rude.
There are a number of screenShotMovieMakers. I’ve used SnapzPro for years, and have certainly gotten my money’s worth.