First Post from Windows Live Writer

I still can’t get used to the immature text-editing experience embedded in web browsers, even in editors that push the limit like Google Docs.  Windows Live Writer claims to give you the desktop experience with a full-fidelity preview of how your post will look on the blog.  Let’s give this a try!

Silverlight and IE Web Services Bug

UPDATE 2 — the workaround:

Using SSL with Silverlight and IE is tricky.  The Microsoft team is too busy to look hard at anything until after MIX (by the way, very excited for Silverlight 3, guys!).  They don’t share public information about bugs like this, which is understandable given the security concerns.  So I have my own solution, unfortunately without understanding the root cause, but it works.

If you are calling web services over SSL from your Silverlight app, you will experience issues in IE6 and IE7.  Cache settings interact with the user’s own settings to prevent Silverlight from receiving responses in many common configurations.  To prevent this completely in IE7, send “Cache-Control: private” in your response headers.

IE6, for some reason, still throws an exception with this header if the page itself is also served over HTTPS.  If the page is plain HTTP but the web services remain HTTPS, everything works.  I really cannot explain this, but it works for now, and all the sensitive data still travels over the web services layer.

My final solution:

  1. Send Cache-Control: private in the headers of all responses.
  2. Sniff for IE6 on the server side (this was a last resort) and redirect the page to HTTP instead of HTTPS.
  3. Disable compression on the server to avoid the IE gzip bugs discussed in the post below, which originally kicked off these IE-Silverlight issues.
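The three steps above can be sketched as middleware.  My actual site ran ASP.NET/WCF; this is a hypothetical Python WSGI version, meant only to illustrate the header, redirect, and compression logic, not the real stack:

```python
# Hypothetical WSGI middleware sketching the three workaround steps.
# The real site ran ASP.NET/WCF; names here are illustrative only.

class SilverlightIEWorkaround:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")

        # Step 2: IE6 cannot read the responses when the page itself is
        # HTTPS, so redirect the hosting page to plain HTTP.
        if "MSIE 6" in ua and environ.get("wsgi.url_scheme") == "https":
            location = ("http://" + environ.get("HTTP_HOST", "")
                        + environ.get("PATH_INFO", "/"))
            start_response("302 Found", [("Location", location)])
            return [b""]

        # Step 3: disable compression by hiding the client's
        # Accept-Encoding, so nothing downstream ever gzips the body.
        environ.pop("HTTP_ACCEPT_ENCODING", None)

        def add_headers(status, headers, exc_info=None):
            # Step 1: mark every response as privately cacheable only.
            headers = [(k, v) for k, v in headers
                       if k.lower() != "cache-control"]
            headers.append(("Cache-Control", "private"))
            return start_response(status, headers, exc_info)

        return self.app(environ, add_headers)
```

The same logic in ASP.NET would live in an HttpModule inspecting Request.UserAgent and rewriting Response headers.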

My only concern was that Cache-Control: private is weaker than preventing caching altogether for sensitive data.  Looking at IE’s local cache, it doesn’t seem to cache any of these SOAP responses anyway, and the header prevents any storage in public or shared caches.

If this solution continues to hold up, I can wait for the real fix to arrive in Silverlight 3, which the team assures me is going to happen.

UPDATE (read below for original problem background):

After a lot more testing and digging, I’ve pinpointed this problem to an interaction with SSL.
In IE, you can go to Tools / Internet Options / Advanced and check “Do not save encrypted pages to disk”.  This clears up the problem for me.  From reading online, it seems that preventing caching with this option unchecked triggers a bug: IE does not save the file to disk because of the cache restrictions, but then cannot load the file because it expects it to be on disk.  Checking the option apparently causes everything to happen in memory.  Many thanks to the forum post for this tip.
Asking users to check this box is a big problem, however.  I saw one suggestion online about setting Cache-Control to private instead of preventing caching entirely.  However, the original workaround to the gzip/caching bug required that we prevent caching.  This leaves me in a pickle, with probably no choice but to disable gzip on all responses, which is a bummer with the large XML responses coming out of WCF, and also harder to control than you might think on a web host.
In summary, I think there are two IE bugs in play:
1. Responses that are both gzipped and cached cannot be read by IE.
2. SSL responses cannot be read by IE when “Do not save encrypted pages to disk” is unchecked and caching is prevented.
These look very similar.  I still cannot understand why #2 started happening on multiple computers all of a sudden when no changes were made to the application.  No changes were made to the computers either, except via Windows Update.
I’m not sure why Microsoft has not shared an official line on this bug (bugs?), including exactly what the problem is, what the workarounds are, and so on.  The forum thread does not count.  There are bug reports about similar issues with IE and Flash.  I’ve tried very hard to give them information (and to get it from them), to no avail beyond “we know about it and will fix it in SL 3.”  I’m not sure what they know about, because they won’t confirm the exact nature of the bugs they believe are being reported on a random forum thread.  These problems are a huge Silverlight limitation and are making me wonder whether I made the right choice in Silverlight for this project.


Silverlight has a serious problem.  I’m surprised we don’t hear more about it, and the fact that we don’t makes me worry that there aren’t many developers actually deploying real Silverlight apps that use web services.

Fiddler's view of a perfectly good HTTP response that Silverlight cannot handle.


There is a known bug that you can read all about on this forum thread:

The forum describes a problem, initially with IE6, where Silverlight 2 RTW cannot accept responses that are both gzipped and cached.  The solution?  Prevent caching.  This took weeks of difficult debugging and searching to figure out.  The workaround: add Pragma: no-cache and Expires: -1 to all responses going back to Silverlight, and it works.  This affected my users on IE6 and IE7, and I’ve now confirmed it with IE8 too.  The workaround was working, until this week.
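For reference, the original workaround amounts to tacking two anti-caching headers onto every response headed back to Silverlight.  A minimal sketch (the helper function is my own, hypothetical; only the header names and values come from the forum workaround):

```python
# Minimal sketch of the original gzip/caching workaround.
# The header values are from the forum workaround; the helper
# function itself is hypothetical, not the site's real code.

def add_no_cache_headers(headers):
    """Append the anti-caching workaround headers to a list of
    (name, value) pairs, leaving existing headers intact."""
    headers = list(headers)  # don't mutate the caller's list
    headers.append(("Pragma", "no-cache"))
    headers.append(("Expires", "-1"))
    return headers
```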

On three separate computers, all of which recently took the Silverlight update of 2/26/2009 as well as standard Windows updates, this bug is now resurfacing.  No changes whatsoever have been made to the Silverlight XAP or to the WCF services; nothing on that server has changed.  Yet here we are again, and this time our known workaround is not working.

I worked for a couple of hours with the good people at Mosso to see if any changes they had recently made could have affected me, and I’m convinced they have not.

To be clear, for anyone who might have a similar problem or a solution: using IE7, Silverlight 2 RTW, and an SSL connection, I cannot receive responses from a WCF service.  I get this helpful error message:

System.Reflection.TargetInvocationException: An exception occurred during the operation, making the result invalid. Check InnerException for exception details. ---> System.ServiceModel.CommunicationException: The remote server returned an error: NotFound ---> System.Net.WebException: The remote server returned an error: NotFound ---> System.Net.WebException: The remote server returned an error: NotFound

at System.Net.BrowserHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
at System.Net.BrowserHttpWebRequest.<>c__DisplayClass5.<EndGetResponse>b__4(Object sendState)
at System.Net.AsyncHelper.<>c__DisplayClass2.<BeginOnUI>b__0(Object sendState)
--- End of inner exception stack trace ---
at System.Net.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
at System.Net.BrowserHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result)
--- End of inner exception stack trace ---
at System.ServiceModel.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result)
at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result)
at System.ServiceModel.ClientBase`1.ChannelBase`1.EndInvoke(String methodName, Object[] args, IAsyncResult result)
at WebStaging.Register.UserServiceClient.UserServiceClientChannel.EndGetReportDirectoryXML(IAsyncResult result)
at WebStaging.Register.UserServiceClient.Register_UserService_EndGetReportDirectoryXML(IAsyncResult result)
at WebStaging.Register.UserServiceClient.OnEndGetReportDirectoryXML(IAsyncResult result)
at System.ServiceModel.ClientBase`1.OnAsyncCallCompleted(IAsyncResult result)
--- End of inner exception stack trace ---
at System.ComponentModel.AsyncCompletedEventArgs.RaiseExceptionI…

Of course, this works fine in Firefox, Chrome, etc.  The problem is very specific to IE.  It’s a known problem that Microsoft says it is addressing in Silverlight 3.  But something has happened, most likely with the recent Silverlight service update, to make the problem worse, such that the known workaround no longer works.

In the screenshot above, you can see a perfectly valid request and response going over SOAP.  The service returns a single argument (an XML document as a string).  Fiddler decoded the response with no problem and without applying any decompression, yet Silverlight throws the error.

This problem plagued us for weeks until we figured it out and took the undesirable route of turning off all caching.  Now it’s haunting us again, and at the moment all IE users of my site will see this error.  If anyone at Microsoft reads this, please help us out.


Synoptic pathology reporting changes everything

In one of his usual insightful pieces on lab software, Bruce Friedman writes about the “oops button” present in some major anatomic pathology (AP) systems. The feature provides a grace period after a pathology report is signed out, before it is released for consumption by the referring physician. Friedman notes that this is not appropriate for clinical lab systems, which deal mainly in numbers, as opposed to AP systems, which deal mainly in narrative text:

Most clinical pathology results are numerical and various rules can be deployed such as autoverification or interval checking to catch errors… By way of contrast, the surgical pathologist frequently works alone, creating narrative reports not amenable to rule-checking or real-time quality control.

I would like to add that the shift toward structured, synoptic AP reports changes this. Unfortunately, most synoptic reporting modules inside traditional AP systems do little more than produce bullet-like text. Very few systems, including mTuitive’s xPert for Pathology, provide truly structured, synoptic reporting. What’s the difference? A synoptic report presents discrete data points to the human eye, while a structured report is stored as discrete data and enables all the benefits of granular data: querying, analysis, quality control using database technologies, expert systems, and so forth.
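To make the distinction concrete, here is a small hypothetical sketch (the field names are invented for illustration, not mTuitive’s actual data model): the synoptic rendering is just one view of the data, while the structured record remains queryable.

```python
# Hypothetical structured AP data point; field names are invented
# for illustration and are not mTuitive's actual schema.

structured_report = {
    "specimen": "breast, left, lumpectomy",
    "histologic_type": "invasive ductal carcinoma",
    "tumor_size_mm": 14,
    "margins": "negative",
}

# A synoptic rendering is just one view of the same data...
synoptic_text = "\n".join(f"{k}: {v}" for k, v in structured_report.items())

# ...but because the data stays discrete, it can also be queried,
# which bullet-like text alone cannot support.
def small_tumors(reports, max_mm=20):
    """Return reports whose tumor size is under a threshold in mm."""
    return [r for r in reports if r.get("tumor_size_mm", 0) < max_mm]
```

This is the whole point of storing the report as data rather than narrative: the same record drives both the human-readable view and the database-style analysis.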

Pathologists are some of the most patient, meticulous, and knowledgeable people in the world. The density of information in a typical pathology report on a malignant tumor far outweighs most written communication in medicine or elsewhere. It is time for AP data to become first-class citizens in the modern world of data management. The way to do this is not to turn pathologists into data entry clerks, but to give them tools that enhance their already instant recall of voluminous knowledge, that give them a consistent method of communicating life-and-death factors to surgeons and oncologists, that help keep generalists up to date with specialists, that allow rule-checking and sophisticated algorithms to prevent errors and find rare diagnoses, and that, via aggregation of structured data, enable real-time epidemiology. The people and the job remain the same, but oh what you can do with the results! Synoptic reporting changes everything.

Merits of the outlining metaphor

Dave Winer and Scott Rosenberg disagree that an outlining metaphor forces you into hierarchical thinking. This discussion is entirely coincidental with, but relevant to, yesterday’s discussion between Jon Udell and Don Thomas about how the tree metaphor works great for users who think a certain way (which turns out to be a lot of people).

In many cases, structured hierarchies are hard to understand and navigate, like a massive file system or a deeply nested taxonomy. But in the case of lightweight tools like outliners that let you do the nesting and grouping, it is a structure you have created yourself. In my own experience with this metaphor (mTuitive Authoring Environment), I’ve found that it is best to resist nesting greater than three levels deep, unless absolutely necessary. Similar to the “three-tap rule” in Palm’s early days (nothing is more than three taps away from the home screen), I’ve used this as a guideline in working with subject matter experts who use our tool, and it keeps the tree under control.
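As an illustration of that guideline (my own sketch, not an mTuitive feature), a simple recursive walk can flag outline nodes nested past three levels:

```python
# Hypothetical outline node format: (title, list_of_children).
# This is an illustrative sketch, not mTuitive's real data model.

def too_deep(node, limit=3, depth=1):
    """Return titles of outline nodes nested deeper than `limit` levels."""
    title, children = node
    offenders = [title] if depth > limit else []
    for child in children:
        offenders.extend(too_deep(child, limit, depth + 1))
    return offenders
```

Running this over an authored outline gives a quick list of spots where the tree has grown past the three-level guideline and might deserve flattening.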

The Health Tech Blog: An Expert System for patient triage

The Health Tech Blog nicely picked up and elaborated on Jon Udell’s screencast. More insight from someone with healthcare and expert systems experience.

Lightweight Authoring for Experts

Experts abound in healthcare. By definition, physicians themselves are all experts in their chosen field. And from these experts, super-experts and super-specialists emerge who become the authorities on very narrow yet important subjects. At mTuitive, it is our goal to make it easy for any subject matter expert to translate their knowledge into a working application that guides users through processes like data collection and decision-making.

Jon Udell produced an excellent screencast where he interviews Dr. Donald Thomas (Mentat Systems), an expert in running hospital emergency departments, on his use of the mTuitive Authoring Environment to create an xPert Application for ER Triage. This is a lengthy, thoughtful discussion on the use of lightweight tools to encode expert medical knowledge, and we all enjoyed it and appreciate Mr. Udell’s and Dr. Thomas’s take.

Jon and Don talk about how a tool needs to accommodate the way the user-author-developer thinks… In the case of our tool, folks who generally like outlining tools ease right into the tree metaphor. They talk about how a programming background is very helpful when getting started, but in our experience at mTuitive, non-technical people have been quite successful creating applications as well. The simple, declarative method of creating dynamic logic and rules makes sense to a lot of people, maybe more so to non-techies who aren’t thinking constantly about what’s happening under the hood. At the end of the day, though, there must be many different cognitive styles of presentation and interaction that would help different experts in different fields encode their hard-won knowledge as useful applications. As we move ahead in healthcare, we’ll be constantly layering and adjusting our tools to make the transition from the expert’s brain to the computer as smooth and pleasurable as possible.

Again, the screencast is worth the time.