The importance of a good (and well-targeted) headline

August 3rd, 2013

I use as a stand-in for an RSS feed of my twitter stream – my basic “Daily” is just all the links from all the people I follow, and I skim it once or twice a day, typically (it’s at!all, if you care to look).

Today as I was skimming I saw a bunch of links about Yahoo! buying RockMelt. Since I haven’t really cared about RockMelt since I stopped experimenting with it a month or so after it launched (social media’s just really not all that central to me, so RockMelt’s utility – for me – was pretty low) I basically noted the fact and moved on.

Then I saw this headline pop up:

Yahoo Has Acquired Rockmelt, Apps To Shut Down On August 31st

I’m not a big fan of TechCrunch – haven’t been for a while. It’s probably a good source of information for a lot of folks, but I don’t tend to enjoy their coverage, so their links go mostly unclicked by me and I find out about whatever they’re covering somewhere else. This link, however, caused me to click through even though I had skipped plenty of other links on the same subject, even ones from Kara Swisher, whose work I usually enjoy a great deal more.


Because the TechCrunch headline targeted my interests better. It was quick and to the point, and it illustrated what might be interesting to me in the RockMelt story. I don’t care how much Yahoo! paid. In fact, I don’t care much about Yahoo! at all (although I admire Mayer for what she’s trying to do there), so Swisher’s headline held no interest for me. However, I do think a lot about what our adoption of a given business’s solutions means for us when that business is acquired, so TechCrunch’s headline generated interest that Swisher’s did not.

Not a revolutionary concept, I know, but (to me) an interesting example of the importance of targeting the right audience.

Parade’s Chelsea Clinton story intro if it was about Bill

April 8th, 2013

One of the ways to tell if you’re writing a story about a news-worthy female in a way that doesn’t display hidden sexism is to replace all mentions of the woman in question with references to a man and see if the story still sounds professional and appropriate. Parade Magazine’s feature on Chelsea Clinton fails this test. Let’s see what it would sound like if the same lead were written about Bill:

The former President is stepping into the public eye and embracing the family legacy—in fact, with the help of an army of talented young doers, he’s ready to change the world.

“Hi,” he says, striding into the room with a smooth gait and a low, sure voice. “I’m Bill!” The handshake is confident, the eyes firmly fixed. With the ease of his father and the directness of his mother, Bill Clinton is stepping out into the world.

At 33, he wears his political royalty in triplicate: There are his famous parents, of course, but also his mother-in-law, former Pennsylvania congresswoman Marjorie Margolies. After several years in the private sector (with McKinsey & Company, then with a hedge fund), Clinton has emerged onto the civic stage in his own right, graceful and glowing and, today at least, in brilliant magenta and lime green.

“My grandmother always wanted me to wear more color,” he says. “She was right.” His whole life, Bill has looked up to Dorothy Rodham, who died in November 2011, and he tries to wear something of his grandmother’s daily. At the moment it’s a clear bangle bracelet.

Rodham was actually the person who most encouraged Bill to turn his years of self-imposed privacy into a more public life. Bill is now helping to run CGI U, an annual meeting for college students held through the Clinton Global Initiative, which his father launched in 2005 to develop innovative solutions to challenges around the world. The CGI U sessions, like the one taking place this weekend at Washington University in St. Louis, require attendees to make a Commitment to Action—a concrete plan to tackle a local or global problem. And the conference itself emphasizes practicalities and logistics, with speakers (from comedian Stephen Colbert to Twitter cofounder Jack Dorsey) and workshops that explain how to get projects done.

On this day, at CGI’s midtown Manhattan offices, Clinton presides over a sharp, focused meeting to pick which projects to spotlight onstage at the St. Louis event. One will plant trees with the money saved by using electronic rather than paper receipts at campus bars and shops; another proposes a low-cost mat that helps diagnose postpartum hemorrhages in women.

Clinton’s concentration does not waver. He demonstrates a masterly command of the issues and swiftly zeroes in on crucial questions. Statistics roll comfortably off his tongue; praise comes as quickly as critical suggestions. Wonky words like metrics and cohort fit naturally into his carefully constructed sentences. When the meeting ends, he sits down for a conversation about how he got here, starting with the challenge of growing up in the public eye. In New York, he says, people stop him every day.

See how that works? Would you ever write in a story about an up-and-coming male that he was “stepping out into the world…in brilliant magenta and lime green”?

Parade should be ashamed of itself for writing this story this way, and should apologize to Ms. Clinton and to women everywhere.

Subscribe to a feed on NewsBlur

March 29th, 2013

Bookmarklet to subscribe to a feed in NewsBlur from the feed’s page. Must be logged in to NewsBlur before using.

Subscribe on NewsBlur

“Source” in a gist.

I might take the time to make it a little more functional – perhaps have it grab the first feed linked on the page when the current page isn’t itself a feed.

Or I might not.

Recovering from a botched git rebase

November 12th, 2012

In my two-phase source control architecture (in short, git repos for local source control, and a shared Perforce repo for team-accessible source control) I’ve had many (many, many) very, very bad experiences with git rebase. Here’s what typically happens:

I’m happily working away on my topic branch and get pretty far behind the Perforce HEAD. Then there comes a bug in one of the Perforce branch tips that I need to fix, so I flip back over to master, sync up Perforce and fix my bug. I flip back to my topic branch and then rebase on the new master state and KABOOM! I frell up some of the rebase merges and end up with lost work.

When those changes have been larger than what I want to recreate from memory (which often happens when I’ve been working in another topic branch in between syncing up my master branch and rebasing the topic branch in question), what I’ve done in the past is to use Git Extensions’ Recover Lost Objects mechanism and then parse the underlying unreachable blobs to find the crap that I lost. *Poof* – there goes a couple of hours.

Working through that process (again) this morning, it failed me for the first time (in retrospect, what happened was that the tip I was looking for was further down the log, and I thought the tips I was seeing were more date-ordered than they apparently are), so I went hunting for a different mechanism to get where I wanted to get, and here’s what I came up with. Instead of looking at lost objects, I went to the head of the branch I needed to recover and looked for the git commit id immediately before the rebase that screwed stuff up. Check that out to a new branch, inspect it, and if it looks good then merge it into the branch you need to fix up, or manually bring the files back over.
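That recovery can be sketched from the command line with git reflog, which records every commit a branch has pointed at – including the pre-rebase state. (The post uses Git Extensions’ lost-objects view instead; this is the plumbing-level equivalent, in a throwaway repo with made-up names.)

```shell
# Throwaway repo standing in for the real one; all names are hypothetical.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > file.txt && git add file.txt && git commit -qm "base"
git checkout -qb topic
echo work > work.txt && git add work.txt && git commit -qm "topic work"
# ...imagine a rebase mangles topic here. The old tip is still listed in
# the branch's reflog:
git reflog topic
# Check the pre-disaster commit out to a rescue branch and inspect it;
# if it looks good, merge it (or copy files) back to the real branch:
git checkout -qb topic-rescue 'topic@{0}'
git log --oneline topic-rescue
```

The reflog entries are ordered by when the branch moved, not by commit date, which is exactly the distinction that bit me above.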

Obviously this is not a great solution if you’re working on a shared git repo, because you’ve got a good chance of leaving a confusing set of commits for your co-conspirators, although if you’re using topic branches “properly” that branch will end up on top of master and won’t screw with any history.

On a side note, one other thing I’m likely to do in the future is make a clean branch of my at-risk topic branch *before* rebasing, so I have a clean branch if something gets lost. That in itself will save me hours, I think.

Stunned by my own brilliance

August 30th, 2012
List<RoleDisplay> assignedRoles = presentationPermissions.AssignedRoles;
presentationPermissions.AssignedRoles = assignedRoles;

WTF was I thinking?

Fitbit/MyFitnessPal integration

May 2nd, 2012

I’ve been puzzling about how the Fitbit to MyFitnessPal integration works and how Fitbit determines how many calories to send to MyFitnessPal as a daily adjustment/exercise entry – enough so that I didn’t really feel like I could trust it as a mechanism for tracking my caloric levels, which is the whole reason I got the Fitbit.

Fitbit calculates your calories burned and then subtracts your calorie deficit (750/day for my current goals) to come up with your daily calorie total, which will differ day by day depending on your activity level.

MyFitnessPal calculates your daily caloric level based on your goals (which equates to a calorie deficit) and your height, weight, and age.

Here’s where I did a little guess work, though:

(I think) Fitbit then takes your current caloric burn rate and estimates what it thinks your total daily caloric burn will be, coming up with an estimated end-of-day calorie amount (based on the calculation above). It then applies an adjustment to MyFitnessPal that, when added to your MyFitnessPal daily calorie allowance, will equal the Fitbit Total Calories Burned – Calorie Deficit amount. This seems reasonable: if I have a highly active morning and then a less-active later day, the rate at which the Fitbit adjustment increases will change. Over the week or so that I’ve had Fitbit and MyFitnessPal hooked up I’ve actually seen the Fitbit adjustment go down as the day progresses, so presumably when Fitbit misestimates your end-of-day caloric burn based on early activity, it compensates later in the day as its end-of-day estimate changes.

So yesterday (2012-05-01), Fitbit told me I burned 2863 calories. Subtracting my 750-calorie deficit left me with a 2113-calorie allowance. MyFitnessPal indicated that my goal, given my specs, should be 1660 calories, so:

2113 – 1660 = 453 calorie adjustment from my activity level yesterday.

Fitbit actually added 452 calories worth of exercise to my MyFitnessPal profile, so pretty much right on. Part of what threw me was that I had altered my calorie goals in MyFitnessPal (because I didn’t trust the way the adjustment mechanism was working), so things didn’t quite add up.
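The arithmetic above, spelled out (numbers from 2012-05-01; the formula is my guess at what the integration does, not Fitbit’s documented behavior):

```shell
fitbit_burn=2863   # Fitbit's total calories burned for the day
deficit=750        # my configured daily calorie deficit
mfp_goal=1660      # MyFitnessPal's static daily goal
adjustment=$(( fitbit_burn - deficit - mfp_goal ))
echo "$adjustment calorie adjustment"   # 453; Fitbit actually sent 452
```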

Here’s a table of my results over a few days (so I can track the accuracy of my guesses over time):

Fitbit to MyFitnessPal calorie adjustments
Date         Fitbit Calorie Allowance   MyFitnessPal Expectation   Difference   Fitbit Adjustment
2012-05-01   2113                       1660                       453          452
2012-05-02   3240                       1660                       1590         1589
2012-05-03   2414                       1650                       764          765

Updated (2012-05-03)

Added a table to track the results over a couple of days to see if my guesses are valid.

Telerik JustDecompile on 64-bit machines

January 12th, 2012

I spent some time yesterday digging into the internals of the .Net 4 System.Web.Profile namespace. At first, Telerik’s trusty JustDecompile beta seemed unable to help me out; all I could get was the public structure of the classes and none of the implementation. I couldn’t understand what was going on and why JustDecompile couldn’t show me the implementation details or why JD couldn’t even show me the existence of private members. This wasn’t a problem across the board – while digging through the method chains I stumbled across some assemblies where JD was able to show me the full monty, so I figured this was limited to portions of the System.Web.Profile namespace. I eventually ended up grabbing RedGate’s early access build of Reflector 7.5 so I could take a peek at the framework code (my normal Reflector trial expired a while back).

I assumed that the problem was that JustDecompile wasn’t properly decompiling some of these assemblies; bugs of that nature have been filed previously, so I went to Telerik’s public issue tracker to file a bug about this assembly, and in the process was poking around to see if JD was having trouble with the whole System.Web.Profile namespace or just portions of it.  While trying to find the limits of the places where JD was having trouble I happened to look at the Assembly Properties in the lower left of the JustDecompile screen and noticed that the Platform Architecture for the assembly I was looking at was x86, not x64, while the processor of the machine I was working on was x64.

Once I opened the proper version of the assembly, JustDecompile was able to show me the implementation details and both private and public members for the classes I was interested in.

Moral of the story? If you’re running JustDecompile on a 64-bit machine, make sure you manually load the 64-bit assemblies (usually found in %SystemRoot%\Microsoft.Net\Framework64). Don’t trust the JustDecompile LoadFramework dialog to do this for you – that will (as of Beta 2012.1.106.0) load the 32-bit assemblies.

Disclaimer: I am a Telerik MVP primarily for JustCode and JustDecompile.

Running WordPress commands in the shell

January 11th, 2012

Every so often I quickly need to check the output of a particular WordPress function. You can do this by inserting some logging calls into the WordPress file you’re looking at, calling a page that executes the function you’re looking for, and then perusing your logs, and sometimes that’s really what’s needed.  But often you don’t need anything nearly so effort-intensive.

Case in point: in tracking down a problem with the MailPress plugin today, wherein email addresses that have a “.” in the pre-“@” part (the local-part, in RFC 2822) were being rejected by MailPress, one of the MailPress forum participants claimed that the issue was caused by the WordPress function is_email() returning false for addresses where the local-part contains a “.”.  I did not believe that to be true and wanted to see the actual output of the code without having to jump through all of those hoops.  Luckily there’s a quick and dirty way to do this, leveraging the wp-load file in your WordPress root and the interactive PHP shell.

To get into the interactive PHP shell, type

php -a

at your shell and you should be dropped into PHP interactive mode, where you can type PHP code and have it executed as it is typed. Your shell prompt will then look like this:

php >
Once there you need to load the WordPress architecture, and using the wp-load functionality that’s really easy. Just execute

require( 'wp-load.php' );

and your PHP environment will now have access to the whole WordPress code-base. From there I could run is_email() against an obviously malformed address and receive

bool(false)

in return, so is_email() is catching obviously bad emails. Running it against the dotted address in question produced

string(28) “”

showing that is_email() correctly recognizes that email.

Things to keep in mind:

  1. You won’t have the full WordPress context that’s generated by executing the PHP script in the browser – you’ll have no current_user, no Loop, and no other WordPress context unless you create it yourself.
  2. Fatal PHP errors will kick you all the way out of the PHP shell, losing your carefully constructed context, so this mechanism probably isn’t suited to cases where you have to set up a big context to look at what you want to look at. It’s suited to isolated functions that rely on nothing but the parameters passed in, not to much that’s more complicated than that.

Testing Siri

December 30th, 2011

I’m testing using Siri to blog by voice using the WordPress iPhone app

It’s pretty amazing how well she can determine what I’m trying say as long as I speak clearly. I’m in my car driving my iPhone just sitting on my lap and I’m just talking to pretty much at a regular voice and she seems to catch most of the words.

It’s a little inconvenient to have to touch the microphone button on the iPhone screen while driving but not having to use a Bluetooth microphone or anything like that is pretty amazing.

When I became eligible for iPhone upgrade recently I was planning on just one with the iPhone 4 (I had a 3GS); it was cheaper and I figured it did pretty much everything I wanted to do. Then a coworker of mine told me it he’d just gotten a 4S and he was amazed by how well Siri was able to recognize voice commands and things like that so I figured I’d spring for the 4S.

Now I’d just like to see Apple do the same thing for handwriting recognition that they’ve done for voice recognition on the iOS devices especially the iPad.

GimmeAGUID bookmarklet

May 27th, 2011

While I’m in the bookmarklet mode, here’s another one.

While writing yesterday’s bookmarklet, I needed a string that was sufficiently unlikely to be the selected text on the page that I could comfortably rely on it to be unique (I’m using it as part of the hack that determines if the browser is IE, since IE’s handling of functions as variables is a little wonky – see line 1 and line 11 of the gist). The first thing that came to mind was a guid, so I googled for an online resource to generate one. The first hit on the results page is this gist, which when executed in Firebug’s console got me the guid I needed for the bookmarklet I was writing.

Later in the day it occurred to me that this was another good candidate for a bookmarklet itself (not that I need guids all that often, but these are a ton of fun to write), so here it is:


Click the bookmarklet and a prompt pops up with the guid in the entry field, preselected for a quick copy. I initially did this as an alert, but that didn’t give you any way to copy the guid. I also tried it in a new window, which worked fine except that you have this extraneous window hanging about, and you still have to select the guid to copy it.

Expanded code in my fork of the aforementioned gist.

URL Unescape bookmarklet

May 26th, 2011

Recently, in troubleshooting bad links and dangerous querystring params on the site I work on for my day job, I’ve repeatedly had to unescape URL-encoded strings.  I can never remember what the codes mean, so I inevitably google for URL escaping and try to piece together what I’m looking at.

After a few rounds of this I decided that automating this task would be a fun little javascript exercise and would save me time (eventually). Enter my first bookmarklet, the



It looks for selected text in the window and unescapes that. If there isn’t any selected it prompts you for something to unescape. Tested in Safari/Windows, Chrome, and Firefox 4. Also sort of works in IE, but I had to add some hackery around the selection for IE, and if IE security settings prevent the prompt() call without approval the first click of the bookmarklet returns null. I may be missing something in the javascript, so suggestions for improvement are welcome, especially around the IE handling.  An expanded version of the bookmarklet can be found in this gist.



I found the problem with my javascript that necessitated the hacky handling in IE, and much to my surprise (not) IE was actually sort of doing the right thing.

If you look at the edit history of the gist you can see that what I was doing was essentially alerting the unescaped version of the anonymous function I had created. Firefox and the other browsers decided that I really meant to use the results of the function and not the function itself, and helpfully evaled the function and used the returned value. IE took me at my word and alerted the function object assigned to the sel variable. Once I corrected the code to make sure that I was calling the function and using its returned value, all the hackery I had to do to get IE to work went away.

Updated bookmarklet code above.

Oh, and it also occurs to me that this wasn’t my first bookmarklet – I created a day of the year bookmarklet a while back that I’ll post here sometime.

Adding CustomDocumentProperties to Office Documents

May 13th, 2011

In working through adding custom document properties to Office docs I ran into several problems that took me a while (and a hint from this question) to figure out.  I posted a short version of my solution there, but wanted to document the full process here.

The first problem I ran into was dealing with the dynamic nature of Office code. Writing what seems to be the obvious initializers doesn’t work as one would expect. The following code won’t even compile:

Office.DocumentProperties custProps = this.CustomDocumentProperties;

A cast is required here:

var custProps = (Office.DocumentProperties)this.CustomDocumentProperties;

After that I tried the obvious thing (this code is within the ThisWorkbook_Startup method, so “this” refers to an Excel document, in this case):

var custProps = (Office.DocumentProperties)this.CustomDocumentProperties;
custProps.Add( "AProperty", false, AStringPropertyVariable.GetType(), AStringPropertyVariable );

Essentially, I’m saying “Call the Add() method and provide the System.Type of the System.String Type for the Type parameter”.  This triggers a COMException: Type mismatch.

Looking at the Microsoft Support article linked to in the previous answers to this question, it became apparent that the Type parameter of the Add() method requires an Office-specific type, so I replaced the 2nd line above with:

custProps.Add( "AProperty", false, MsoDocProperties.msoPropertyTypeString, AStringProperty );

and setting the custom property worked great. Because it’s a CustomDocumentProperty, Office will add the custom type without difficulty, but I only want to add it if it doesn’t already exist – and when an item of the CustomDocumentProperties collection doesn’t exist, there’s no way to know short of catching a System.ArgumentException. So the final, full solution for me ends up as:

Office.DocumentProperties custProps = (Office.DocumentProperties)this.CustomDocumentProperties;
try {
    string aPropValue = custProps["AProperty"].Value;
} catch ( System.ArgumentException ex ) {
    custProps.Add( "AProperty", false, MsoDocProperties.msoPropertyTypeString, AStringVariable );
}
Interestingly enough, while looking for a little additional background on the var keyword in relationship to this post I stumbled on Scott Hanselman’s post (strangely enough also related to DocumentProperties in Word and the unsuitability of C# 3.0 for that purpose). In it he points to a different MSDN article (the VS 2010 version of that article is essentially the same, with the exception of the note regarding host controls – not relevant in the more up-to-date Office and VS versions) that he says doesn’t work properly, and while I didn’t actually run the code from that article it’s using pretty much the same concepts that are working for me here. I would be curious to know which part wasn’t working – perhaps it didn’t work very well in Office 2003 or 2008 (I’m using Office and VS 2010 and C# 4).

Viewing an Excel workbook from an Office 2010 project in Visual Studio

May 11th, 2011

Another post in the vein of “let my blog be my memory.”

When working on an Excel workbook in an Office 2010 project I always forget how to open the designer view of the workbook so I can look at the actual worksheets.

Turns out you have to right-click a specific sheet and choose “View Designer” and from there navigate amongst the sheets to the one you want. Doing that to ThisWorkbook.cs won’t show you the actual worksheets.

Twitpic’s Terms of Service faroofaraw

May 11th, 2011

Quite a day today for the Twitpic folks.

They make an update to their terms of service and get the royal slapdown from the Twitterati in what appears to me to be a completely unwarranted reaction.

Thankfully (for them at least) they responded fairly quickly to allay user fears. I don’t use the service much, so don’t really have a horse in this race, but here are my thoughts, for whatever they’re worth:

First, Ian Visits has a Very Useful Post™ that outlines the changes.  Indeed the essence of my response is left in a comment on his post, but I thought “Hey, I’ve got a blog. Why don’t I put my comment in my own space?  Now, where did I put that thing?”

Anyway, if you want to see the diffs between the original ToS and the two newest versions, look at Ian’s post.  However, for my mind here’s the money bit:

You may not grant permission to … to retrieve from Twitpic [emphasis mine] for distribution, license, or any other use, content you have uploaded to Twitpic.

See what they did there?

They said, more or less, “You don’t have the right to sell or give permission to any media organization to get your content FROM OUR SITE.” In other words, we don’t want to pay bandwidth charges to host your media play.  We’ll keep it up here, for free, for use in a social service we’re offering, but don’t try to make money on our bandwidth bills.

Maybe that’s too much, and if things hadn’t proceeded as they did and had Twitpic not retracted that portion of the ToS perhaps the market would have weighed in (I rather doubt it, though; generally Terms of Service issues aren’t important enough to most people to have a market-sized effect). Certainly it wasn’t well or clearly worded. From my perspective, though, it seemed like a perfectly reasonable expectation. Nowhere did they say “You can’t sell images you post to our site,” or any of the other nonsense people seemed ready to attribute to them – all along they said “All content uploaded to Twitpic is copyright the respective owners” or words to that effect, and even if the Twitpic-hosted image is the only copy of the photo you posted that you have it’s not like you couldn’t download a copy, host it yourself, and sell it to your heart’s content.

Typically when I get this far into a rant that seems to put me at odds with a lot of other folks I find myself thinking “There are a million-bajillion people out there who are smarter than me – why don’t they see it this way?” so what am I missing?

On a side note, a couple of weeks ago I toyed with the idea of keeping the text of every license I had agreed to in a git repo and updating them as they were updated – making for very easy diffs amongst different versions. That swiftly turned out to be a bigger pain in the ass than I anticipated and swiftly got thrown off the brain-train, but perhaps that was premature….

Puppy teeth encounter my new laptop power supply. My packrat tendencies serve me in good stead, I have one laying around

April 16th, 2011

(sent to via email)

A miracle of life.

April 13th, 2011

(sent to via email)

Woman piling your cart with a dozen bottles of water I AM JUDGING YOU! Oh, wait.

April 12th, 2011

(sent to via email)

A good morning’s work – this is more than 20% turnout just for the morning shift.

April 5th, 2011

(sent to via email)

Apparently I’m having a very “leet” calorie tracking day.

April 4th, 2011

(sent to via email)

“The remote certificate is invalid according to the validation procedure” error, intermediate certificates, proxy servers, and WinHTTP

March 22nd, 2011

We recently upgraded our SSL certificates on our web servers – we use GeoTrust, who recently moved to the intermediate root certificate model (well, recently meaning July, 2010) and this is our first renewal since the change.

This should have been a completely transparent change, and all in all it came pretty close.  Where it didn’t go so smoothly was where SharePoint web services is concerned.

We run a few mechanisms in our stack where a process on a server relies on a connection to the SharePoint Lists web service; some are web server-based and some are running from within SQL Server. Our network architecture also puts all of our infrastructure behind a proxy, and users are granted access to the internet by login rules that set the Internet Explorer proxy appropriately.

When we installed the new set of certificates (including one for our SharePoint web servers) everything went well for the users.  They connected to SharePoint and downloaded the intermediate certificates without any difficulty.

On the servers, though, it was another matter entirely. Even though the user in whose context we were executing the web service calls had been granted access to the internet, and had previously had its IE proxy settings properly set, the attempts to negotiate an SSL connection were failing with the error message:

    The remote certificate is invalid according to the validation procedure

As it turns out, the servers were attempting to download the intermediate certificates using WinHTTP, which maintains a separate proxy configuration that needed to be set in order to allow the servers to access the intermediate certs. Another short-term option was to temporarily open traffic from each server to the internet without filtering through the proxy, which would allow the server to download the intermediate certificate, but in the long term adding the WinHTTP proxy settings is the better solution.


In the end, our Infrastructure team was of two minds about this issue, and since the WinHTTP solution required a registry edit, the simpler of the two solutions was attempted. While the service account that SQL Server runs under had been granted internet access previously, its current proxy settings in IE were out of date. We logged into the server as that user, updated the IE proxy settings, and re-ran the SQL job, and all worked as expected. On a previous occasion we had simply opened up the problem machine to the internet for long enough for it to download the intermediate certificates, but this seems like a simpler and slightly more secure solution.
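For future reference, the WinHTTP proxy lives in its own store and can be managed without hand-editing the registry: netsh has a winhttp context (on Vista/Server 2008 and later – older machines used proxycfg.exe). This is a sketch from memory; the proxy host and bypass list below are placeholders, not our real values:

```
netsh winhttp show proxy
netsh winhttp set proxy proxy-server="proxy.example.com:8080" bypass-list="*.example.local"
rem or copy the current user's IE proxy settings straight across:
netsh winhttp import proxy source=ie
```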

Git template directory on Windows

January 16th, 2011

I just spent entirely too much time trying to find the git-init template directory on my Windows machine (so I could copy my default hooks from one machine to another).

In order to save this hard-won information to non-volatile memory (i.e. the Internets), here it is (at least for windows 7):

C:\Program Files\Git\share\git-core\templates – on a 32-bit OS

C:\Program Files (x86)\Git\share\git-core\templates – on a 64-bit OS

(the default hooks themselves live in the hooks subdirectory under templates)
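A related trick that avoids hunting down the installed template directory at all: git init accepts a --template option, so you can keep your hooks in a directory you control and point at it when creating a repo. A sketch (the directories here are throwaway mktemp paths, standing in for wherever you keep your templates):

```shell
tmpl=$(mktemp -d)                       # stand-in for your own template dir
mkdir -p "$tmpl/hooks"
printf '#!/bin/sh\nexit 0\n' > "$tmpl/hooks/pre-commit"
chmod +x "$tmpl/hooks/pre-commit"
newrepo=$(mktemp -d)
git init -q --template="$tmpl" "$newrepo"
ls "$newrepo/.git/hooks/"               # pre-commit came along automatically
```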

Is this a new Google search feature?

November 12th, 2010


… or did I just miss it?

Munchkin: Cthulu, apparently about farming.

September 21st, 2010
Overheard this weekend after a round of Munchkin (our first!):

Son: “There are a whole bunch of different Munchkin games. Munchkin: Clerical Errata, Munchkin: The Need for Steed, Munchkin: Cthulu, which I guess is about farming…”

Wife and me: LOL…thud…ROFL…. “What makes you say that?!”

Son: “Well, there’s a cow on here.”

As it turns out the marketing images on the sides of the Munchkin game include a picture of Cowthulu. From the perspective of someone who knows nothing of H.P. Lovecraft, I can see where one might be…confused.

(sent to via email)

Visual Studio/Office 2010 development and programmatic access to the VBA Project System

September 20th, 2010

While attempting to spin up some VS/Office 2010 Excel development today I kept running into the following error message:

“Programmatic access to the Microsoft Office Visual Basic for Applications project system could not be enabled.”

I was trying to create Office Development projects by opening existing workbooks (about to embark on some very exciting and long-overdue updates to an 8 year old Office VBA project) and in some cases the project was created but the error kept recurring every time I opened the project in VS2010.  In other cases the project creation wizard aborted without creating the project.

This is one of those errors that I dread trying to uncover help for on the web – it seems so specific to my exact situation that I’m unlikely to find help – but in this case I was pleasantly surprised: googling the exact error message returned a couple of valid-looking hits:

The second points to the first as its source, so really only one actual result, but it looked valid.  Awesome! My problem is about to be solved!

Only not so much.  Turns out that the setting under discussion (“Trust access to the VBA project object model” in Office 2010) was already set to the correct value in Office 2010. Apparently no help here.

So I fiddled around a little more, then tried restarting to see if perhaps some ghost of an Office app still had a hold on the VBA project model (the error specifically says that having Word or Excel open can block programmatic access).  No joy.

So I finally went back to the posts in question and re-read them. They seem perfectly relevant and specific to my exact error.  The only difference being that they’re focused on Office 2007 and not Office 2010….


I have both Office 2007 and Office 2010 installed. The relevant configuration point was set in 2010, but not in 2007.  I set it in 2007 and everything is hunky-dory.
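For reference, my understanding is that the Trust Center checkbox persists as a per-application registry value named AccessVBOM (version key 12.0 for Office 2007, 14.0 for Office 2010 – verify against your own install before importing). A .reg fragment that enables it for Excel 2007 would look something like:

```reg
Windows Registry Editor Version 5.00

; Office 2007 = 12.0; use 14.0 for Office 2010
[HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Excel\Security]
"AccessVBOM"=dword:00000001
```

Flipping the checkbox in each Office version’s Trust Center accomplishes the same thing, which is what I actually did.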

Posting as a public service, to enhance the memory of the web, in case someone else has this problem (or in case I have it again and forget what I did to resolve it).


Facebook credits at Target. Hmmm.

September 4th, 2010


IronPython error messaging for instance methods

November 3rd, 2009

Recently I’ve been using IronPython as a dynamic testing framework for .Net assemblies more and more. Particularly in the current development iteration for our company’s website, where a contractor is working on the data access layer classes for a database schema of my design. In order to test his data access methods I need to new up an instance of the various classes and check the output of the methods. We’re simultaneously starting to do some coded integration tests using NUnit for the same purpose, but when I just want to quickly smoke-check his work before sending him on to the next task, opening an IronPython console and instantiating his class and running through the methods can’t be beat for quickness. I’ve been meaning to write a little bit more about this, but suffice it to say it’s been working really well for me.

My typical usage pattern is to import the clr and add a reference to the dll in question by path and filename, and then to import just the class I want to test. So far, so good. But when testing a little refactoring I was doing this morning I encountered an error message that took me a few minutes to isolate and resolve.
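The pattern looks roughly like this in the IronPython console (the path, assembly, and class names here are hypothetical stand-ins):

```
>>> import clr
>>> clr.AddReferenceToFileAndPath(r"C:\src\DataAccess\bin\Debug\DataAccess.dll")
>>> from DataAccess import CategoryLoader
```

From there I can instantiate the class and poke at its methods interactively.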

The call I was using was

And the error message I saw was:

TypeError: LoadSimpleCategoriesByRegion() takes exactly 2 arguments (1 given)

I stared at the code for a while, swearing up and down that the method I was calling *did* have a 1 argument overload available. Finally I realized that I was calling an instance method in a static fashion – the other argument that IronPython was expecting was an instance of the class to execute the method on.
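The same mistake is easy to reproduce in plain Python (class and method names here are made up; CPython’s error wording differs slightly from IronPython’s, but the cause is identical):

```python
class CategoryLoader:
    """Stand-in for the data access class under test (hypothetical names)."""

    def LoadSimpleCategoriesByRegion(self, region):
        # Pretend this hits the database and returns category names.
        return ["%s-categories" % region]


# Static-style call: the string gets bound to `self`, so `region` is
# missing and a TypeError complains about the argument count.
try:
    CategoryLoader.LoadSimpleCategoriesByRegion("West")
except TypeError as err:
    print("TypeError:", err)

# Instance call: the instance supplies the implicit first argument.
loader = CategoryLoader()
print(loader.LoadSimpleCategoriesByRegion("West"))
```

The “extra” argument the error message counts is the instance itself.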

Not exactly rocket science, but a confusing (to me, at least) error message that took me longer than it should have to figure out.

NUnit and app.config in VS2005

November 2nd, 2009

I ran into a little problem yesterday in setting up unit tests for some new data access classes for my employer’s website.  I’m using NUnit 2.4.8 and integrating it with Visual Studio as outlined here, so when I’m ready to run the tests I select the external tool and NUnit spins up and loads the project I’m working on (I also use a plugin that starts NUnit and attaches the Visual Studio debugger to the NUnit process, but I often don’t want to debug the tests, and the external tool is faster to start).  NUnit is configured with Visual Studio integration on, and whenever I ran my tests they would fail because NUnit wouldn’t use the appropriate configuration settings.

There are plenty of people who have encountered this problem, and plenty of versions of the “right” way to add the config file.  None of them fit my exact situation – I have a “correctly” named config file in the directory with the DLL my tests are testing, and another “correctly” named config file in the directory where my .nunit file resides.

I finally discovered the correct solution for my use case.  The difference lies in the fact that I’m running NUnit in Visual Studio integration mode and loading NUnit with the project file of the Visual Studio project I’m testing.  Because NUnit loads the tests directly from %projectroot%/MyProject.csproj, my config file had to be named MyProject.csproj.config.  Once I made that change my tests started running fine.  They didn’t pass, but at least they were running :)
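A sketch of the layout that ended up working for me (the project name and paths are illustrative, and “ignored” reflects my understanding of NUnit’s behavior in this mode rather than documented fact):

```
%projectroot%\
    MyProject.csproj
    MyProject.csproj.config    <-- the copy NUnit reads when loading the .csproj
    app.config                 <-- ignored in this mode
    bin\Debug\
        MyProject.dll
        MyProject.dll.config   <-- also ignored in this mode
```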

Seems like a pretty simple answer, but I figured I’d post it here to add to the pool of possible solutions for someone in the future to run across (and like as not, that someone could be me).

links for 2008-04-03

April 3rd, 2008
  • Many of the buildings these architects produced were absolutely extraordinary – and, frankly, it seems impossible not to look at these images and judge 20th century Germany in light of the catastrophic stupidities that led to its murderous exile of the

links for 2008-03-30

March 30th, 2008

links for 2008-02-06

February 6th, 2008