The study linked in this post is a systematic review published in the Canadian Medical Association Journal (CMAJ) in March 2010. It illustrates a much smaller than expected beneficial effect of point-of-care computer reminders on physicians' behavior. This is not unexpected, as these reminders work against how our brains work. They are mere distracters when presented while we are performing other cognitive tasks.
I avoid saying I told you so, but here is a post I published in April 2008. It summarizes a study by Oulasvirta and Saariluoma (2004) that shows the detrimental effects of distracting readers with alerts. And I must mention Jef Raskin's legendary book, The Humane Interface, which addresses this very point.
Law et al. (2005) conducted a study comparing the decisions made by neonatal ICU nurses and physicians based on physiological data presented either in trend graphs or in text. Although participants overwhelmingly preferred the trend graph over text (29 versus 11), appropriate decisions were made more often when the data was presented in text (a mean of 0.38, SD = 0.14, versus a mean of 0.51, SD = 0.14). Law et al. tested physiological data such as heart rate and oxygen saturation, which lend themselves to being presented in graphs. One might expect that textual data would be better presented in sentences rather than as single words in forms.
Another big mistake is the concept of an application. Applications are programs that prevent you from using most of the power of your computer. They are walled cities. When I am using my CAD package, I am prevented from using the spelling checker in my word processor. When I am using my word processor, I am prevented from adjusting the gray scale of the lettering as I can in my image processor. When I am using my image processing program, I am prevented from solving equations, and so on. Make up your own list. Some operating systems build tunnels between applications that we can crawl through (Microsoft’s OLE, Apple’s Publish and Subscribe features, HP’s New Wave, for example), but we want to run aboveground.
This quote deserves no further explanation. Within health care, the design of workstations should be reconsidered using Jef Raskin's recommendations. If you have not read his book, The Humane Interface, it is time to do so.
I will leave you with another quote from the same article:
Designers forget that humans can only do what we are wired to do. Human adaptability has limits and today’s GUIs have many features that lie outside those limits, so we never fully adapt but just muddle along at one or another level of expertise. It can’t be helped: Some of the deepest GUI features conflict with our wiring. So they can’t be fixed. Like bad governments, they are evil, well entrenched, and must be overthrown.
We all agree that formatted text and text within forms cannot convey the subtleties conveyed by freely written natural language. Yet informaticians push for standardized text entries. With today's technology, codifiable and formatted text allows for easy extraction of data. The extracted data can be used flexibly in foreseen and unforeseen ways. Two foreseen uses are conducting studies and designing smart decision support systems.
On the other hand, extracting data from naturally written text (non-codifiable and non-formatted) is hard and, with today's technology, unreliable.
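A minimal sketch can make this contrast concrete. The field names and note text below are invented for illustration; the point is only that reading a value out of a structured entry is a trivial lookup, while recovering the same value from free text requires pattern matching that silently fails whenever the clinician's phrasing varies.

```python
import re

# Structured, codifiable entry: extraction is a simple, reliable lookup.
# (Field names are hypothetical, not from any real EMR schema.)
structured_note = {"symptom": "chest pain", "duration_days": 3, "smoker": True}
duration_structured = structured_note["duration_days"]

# Free-text entry: extraction depends on a fragile pattern.
free_text_note = ("Patient reports chest pain for the past 3 days. "
                  "Long-time smoker, though he says he recently cut down.")

match = re.search(r"for the past (\d+) days", free_text_note)
duration_free_text = int(match.group(1)) if match else None

# The pattern breaks as soon as the phrasing changes ("since Monday",
# "x3 days", "three days"), even though a human reader still understands it.
reworded_note = "Chest pain x3 days. Smoker."
match2 = re.search(r"for the past (\d+) days", reworded_note)
duration_reworded = int(match2.group(1)) if match2 else None
```

Here the structured lookup and the regular expression happen to agree, but only the structured entry is robust; the reworded note yields nothing at all, which is the unreliability the paragraph above describes.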
Is there a compromise? That is, can data within the EMR convey the subtleties of language and at the same time be flexible enough to use for things such as research, quality improvement (QI), and designing smarter decision support systems?
I do believe that writing naturally and conveying the full meaning of text comes first. We should wait for technology to change and improve instead of forcing people to change their writing style to a less effective one. Yet, if I am to compromise, I would propose the following rule:
Use codifiable and formatted text entry only when the writer and reader have similar background knowledge.
What follows is the rationale for this rule:
As I am preparing to write my thesis proposal, I was reading some of my previous posts. An idea I had then is worth further explanation. I need to point out the context of my literature review at that time. I was trying to find out the reasons for the need for consistent interfaces. This led me to Shiffrin and Schneider's dual information processing theory. It also led me to Michael Polanyi's theory of tacit knowing. Reading this last post, I feel my final comment on it needs more explanation. This is the paragraph I need to explain:
Finally, it is worth mentioning that included in our subsidiary awareness are our previous experiences, biases, beliefs, tasks, and goals. (Brohm 2005) Out of these subsidiary awareness components our focal awareness is formed.
If our brains only had the two processes of information perception of Shiffrin and Schneider (1977), then our subsidiary awareness is probably formed by automatically perceived information. The meanings formed from these perceptions are meanings we started attaching since our birth, or even before that. Our previous experiences, biases, and beliefs are some of the meanings we attach to perceptions every day of our lives. We do need to take care about which meanings we attach, especially knowing that it takes extra effort to unlearn and relearn new meanings for our perceptions. And even if we are unconscious of it, these meanings play a role in our focal awareness. Shiffrin and Schneider (1977) found that automatic processes are hard to stop. Once started, they usually run to completion. We can try to attenuate the stimulus that started them or try to start another automatic process to stop them. An example of an ingrained meaning attached to a stimulus is one I mentioned in another post: in China's stock market, red means the opposite of what it means in the US and most of the world. In China, red means the price is up!
Computerization of documents drastically changes readers' relevance judgment behavior. Buchanan and Loizides (2007) conducted a study assessing the behavior and outcome of document relevance judgment by 30 participants divided equally into three categories. Within each category, participants were asked to assess the relevance of the same 20 scholarly documents to a given topic. The three categories were as follows:
1. Given the documents in paper form
2. Given the documents as PDF files
3. Given a one-page summary of each document in PDF format. These participants were allowed to download the full document if they desired to.
Although the first two categories' relevance scores were higher (63%) than the third category's (57%), the difference was not statistically significant. Yet the behavior of assessing these documents differed considerably between the three categories. Probably the most striking finding is that within the digital categories, participants spent most of their time on the first page (64%), and 34% did not even scroll past the first page! When comparing the second group (full PDF documents) with the paper group, the PDF group spent more time scrolling (15%) and less time stationary reading (17%) compared to the paper group (<5% and 50%, respectively)!
Nygren and Henriksson (1992) noticed that physicians skimmed over parts of the paper medical record (P-MR) to assess their relevance, then either skipped to other parts or started reading.
Oxford's definition of skimming is similar to what Nygren and Henriksson intended it to mean. Oxford defines skimming as "to read through quickly, noting only the important points." The goal is to assess the relevance of text; in skimming, we use as many clues as possible to make that assessment. (BBC) Therefore, skimming is a form of information relevance judgment. A search of the health informatics literature did not reveal information on this process. Yet in library science there are similar concepts, such as the more general information triage and the more specific document triage and document relevance judgment. Buchanan and Loizides (2008) define information triage as the activity where a user determines the relevance of a piece of information for a particular information task.
I will focus on what is presumed to be the ultimate goal of reading medical records: understanding, or comprehension. Reading that does not lead to comprehension is not covered here.
Understanding text is the conversion of text into a representation of it in our brains. This representation can be multilevel: the lowest level is the surface form of the letters, while the highest is commonly called the situation model (SM). (Graesser, Millis, and Zwaan 1997) The situation model is a multidimensional representation of the text formed by the interaction of the text with the reader's previous knowledge. The reader performs different inferences based on the text and his or her previous knowledge to create this high-level representation. There is agreement on the content of this representation, as supported by empirical evidence; its structure and storage location are still debated.
Here is an illustration of the presence of situation model (SM). Read the following two sentences: (Zwaan 2003, p94)
I will divide Paper Medical Records' (P-MR) affordances into data entry affordances and data retrieval affordances. I do realize that the divide between these two groups of affordances is not as neat as it is in current computer systems. With paper, you can be reading and jotting notes in the margins at the same time, while with most current computer systems you cannot simply add notes in the margin of forms and documents; in these systems, data entry and data retrieval are separate processes. In this post, I will only address data retrieval affordances.
Having taken the time to write posts on medical records' goals and on affordance, I will rewrite a post on the divide between affordances and goals. In my previous post, I used the term purpose instead of goals; in this post, I will stick with goals.
The focus of this post is the study conducted by Nygren and Henriksson (1992). It is the only study I have come across that truly tries to understand what actually takes place when physicians use medical records. The study focused only on paper medical records. Conducting a similar study to understand how physicians use electronic medical records (EMR) would be much more challenging due to the diversity of EMRs.
This study neatly illustrates the more elusive goal of using medical records: their role as an intellectual tool. The P-MR is a tool that supports physicians' cognitive processes, including decision making.
An Affordance is an action possibility available in the environment to an individual, independent of the individual’s ability to perceive this possibility.
-McGrenere and Ho, 2000
Affordance as a concept was imported into the world of design and human-computer interaction by Norman (1988). He imported it from the work of Gibson on visual perception. Yet Norman's views differ from Gibson's. For a discussion of these differences and a better understanding of affordance, you can review this summary at Interaction-Design.org.
I will adopt Gibson's view, as it is the most agreed upon. (McGrenere and Ho 2000) Affordances are properties that can be used by an actor, such as a human, whether these properties are perceived and realized or not. Norman discounts affordances that are not perceived by actors, while Gibson accounts for all properties that can be acted on, whether perceived or not. A designer may design a useful device of which a given user can use only some features. For example, paper has the affordance of being written on and being carried. Paper also has the affordance of being turned into a paper airplane, whether you realize this affordance or not.
In this post, I am proposing that one of the main reasons for the poor computerization of manual processes in health care is a poor understanding of computer affordances. There are valid reasons for this suboptimal computerization, which I will mention at the end.