Posted by: hughleslieMD | July 5, 2012

Windows 8 first impressions

I love being on the bleeding edge – it’s taught me to keep good backups and has given me some grief, but it’s also a lot of fun.  I ran the Windows 7 beta almost from the first release as my day-to-day operating system and was amazed at how well it performed.  As soon as I started using it, I just couldn’t stand using XP, which felt like something out of the dark ages.

I have been keeping a close eye on Windows 8 and reading a lot about it – lots of people are critical of the big interface changes, which I think reflect some smart and pragmatic thinking inside Microsoft.  Mobile computing is here to stay and the PC is on the decline.  If Microsoft want to survive and thrive in this space, they need to innovate, and not just at the edges.  Windows 8 (and the recent announcement of the Microsoft Surface tablets) is a strong answer to this and a bit of a gamble that I think will pay off.

So I took the plunge yesterday and installed Windows 8 over my current Windows 7 install… (after backing everything up of course!).   I was pleasantly surprised at how quick and easy it all was and how quickly I became used to the new GUI paradigm – the Start button and menu have been replaced by a ‘Metro’ screen with large square icons.  There is now a whole new ecosystem of Metro-style apps which are designed predominantly as touch applications but still work well with a keyboard and mouse.  There is a seamless transition between this and the desktop, which will be very familiar to anyone using Windows 7 (with some new twists).

It’s really fast – it feels even faster than Windows 7.  I am running a three-year-old Dell machine with 4GB of RAM and an Intel Core 2 Duo P8400.  This is relatively old tech but it performs really well with Windows 8.  There are a whole lot of new ways of doing things, but I am finding the adjustment easy and I generally like how it works.  Simple things, like getting the Office DVD ISO, realising that Windows 8 recognised it as an ISO file, double-clicking it and finding that Windows 8 mounts it as a DVD drive and lets me install directly from it.  No more third-party software to mount and run an ISO file – magic stuff.

Everything seems to run fine, including the existing Windows 7 drivers.  I am just starting to explore the Metro apps, and I’ll post more as I go.

So far I really like it…

I have recently returned from a short trip to New Zealand where Heather and I spoke at a joint HINZ and HL7 NZ meeting about openEHR and its practical application in eHealth.  The main speaker at this event was Ed Hammond, one of the fathers of HL7, who spoke on the ‘killer application’ for eHealth.  Ed also attended a full day’s openEHR training that we ran for HL7 with a very technical audience, and we had some interesting and positive discussions about the place of openEHR in getting clinicians involved in creating content for EHRs and messaging.  The openEHR message was really about a cohesive and pragmatic approach to creating clinical content that can be reused in any eHealth context, whether that is a message, a CDA document, or an EHR or EMR.

This year, we have also been working with NEHTA (Australia’s national eHealth program) to improve their tool chain approach.  NEHTA, like many jurisdictions, have been concentrating on producing CDA specifications over the last couple of years.  There is nothing wrong with this, unless you begin with CDA as the specification for your eHealth clinical framework.   What needs to be recognised is that CDA is only one of many possible serialisation outputs or artefacts that may be needed for a comprehensive eHealth program.  In Australia, we need HL7 V2 messages, CDA, clinical repository content specifications, Archetypes, XML schemas, GUI specifications, documentation and, in the future, possibly the new HL7 FHIR resources or many other artefacts.  If we start with any one of these, such as CDA, as the basis of our eHealth framework, then we are stuck with that artefact and it is very difficult to move to anything else.  NEHTA have realised this and are taking a different approach, and New Zealand’s national program have also decided to follow the same or a similar approach (http://www.ithealthboard.health.nz/content/national-health-it-plan).

So what is this approach?

It starts with building a set of logical clinical models that are shared and governed at a national level.

Firstly, these models need to be understandable by domain experts, because it’s the domain experts who need to agree on the content.  Any approach that has technical experts agreeing on clinical content is doomed to failure – I see this happening all the time at HL7 working group meetings.

Secondly, the models need to be computable.  Once the content of the models is agreed by clinical and domain experts, they need to be machine-processable so they can produce all of the technical content that technicians need to create software, message specifications and everything else a complex eHealth environment requires.  The artefacts need to be usable with common tools, without any need to understand very complex and abstract specifications like openEHR or the HL7 RIM.
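To make the “computable” point concrete, here is a very rough sketch in Python – purely illustrative, not NEHTA’s or openEHR’s actual tooling, and the model, field names and output formats are all invented – of a single logical model, defined once as data, being rendered into two quite different technical artefacts:

# Illustrative sketch only: one shared logical model rendered into two
# different technical artefacts. The model, names and output formats are
# invented and far simpler than real archetype-based tooling.

from xml.sax.saxutils import escape

# One logical clinical model, agreed by domain experts, expressed as data.
BLOOD_PRESSURE_MODEL = {
    "concept": "blood_pressure",
    "elements": [
        {"name": "systolic", "units": "mm[Hg]"},
        {"name": "diastolic", "units": "mm[Hg]"},
    ],
}

def to_xml_schema_stub(model):
    """Render the logical model as a minimal XML Schema fragment."""
    lines = [f'<xs:complexType name="{model["concept"]}">', "  <xs:sequence>"]
    for el in model["elements"]:
        lines.append(f'    <xs:element name="{escape(el["name"])}" type="xs:decimal"/>')
    lines += ["  </xs:sequence>", "</xs:complexType>"]
    return "\n".join(lines)

def to_flat_message_stub(model, values):
    """Render one instance of the model as a flat, pipe-delimited segment."""
    fields = [f'{el["name"]}={values[el["name"]]} {el["units"]}' for el in model["elements"]]
    return f'OBX|{model["concept"]}|' + "|".join(fields)

print(to_xml_schema_stub(BLOOD_PRESSURE_MODEL))
print(to_flat_message_stub(BLOOD_PRESSURE_MODEL, {"systolic": 120, "diastolic": 80}))

The point is not the code itself but the shape of the approach: the clinical content is agreed once, and the CDA documents, V2 messages, schemas and GUI specifications are all downstream outputs generated from it.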

Australia, New Zealand and many other parts of the world are now using openEHR Archetypes as the approach to developing all of these clinical models.  You can see this at work here at the Australian NEHTA web site: http://www.nehta.gov.au/connecting-australia/terminology-and-information/clinical-knowledge-manager

This is designed to be a very pragmatic, cost-effective and implementable approach to sharing clinical content, and one that is easy for vendors to implement in real-world situations.  In the next few weeks, this blog will be exploring this approach in more detail and giving concrete examples… stay tuned.

Posted by: hughleslieMD | June 15, 2010

Why don’t messages solve the interoperability problem?

If you are starting out in health information technology, it can be very overwhelming in terms of the sheer number and complexity of the standards that are out there.

Having been active in this space for more than 15 years, I have come to realise that the issue is not about standards or applications as such, but about how we think about them and use them.  The current approach to solving the EHR interoperability problem is to go out and build or buy an application.  This is easy, and people often suggest that open source applications are the answer, but open source applications don’t solve the problem of interoperability.

Why?  Because the problem is not the applications (there are over 7,800 clinical apps in the USA alone), but the DATA.  Even open source health applications use their own unique data specification, and the data is captured in a non-standard way that cannot be shared with any other system.

The approach to date has been to treat all of these applications as ‘black boxes’, where the content is not important.  The energy has been spent on messaging between them, and this has been going on for 25 years or more.  In terms of interoperability of complex clinical data, it has been a major failure, and we are almost no further ahead with a messaging approach than we were before.

Why?  Because you can really think of each of these systems as having its own ‘health language’ – designed to capture the data required for the particular clinical purpose it was built for.  Even systems that are built for exactly the same purpose will have different ways of structuring the data (the information model).  Many systems allow you to structure data on an implementation basis, so that even between different implementations of the same system there is no interoperability.  A lot of the data captured by applications relies on the user interface of the particular application for its semantics, i.e. the data is relatively meaningless unless displayed in the context in which it was created on the screen.  Messaging data like this requires a constant translation of the information from one system to another, and when you add a third system you need to do the translation again for each of the other systems, because the language is different.
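The scale of that translation problem is easy to see with a little arithmetic.  Here is a back-of-the-envelope sketch in Python (hypothetical numbers, with each direction of translation counted separately) comparing point-to-point translation with translating via a single shared model:

# Back-of-the-envelope arithmetic: point-to-point translations versus
# translating to and from a single shared model ('lingua franca').
# Each direction of translation is counted separately.

def point_to_point(n_systems):
    # Every ordered pair of systems needs its own translation.
    return n_systems * (n_systems - 1)

def via_shared_model(n_systems):
    # Each system needs one translation in and one translation out.
    return 2 * n_systems

for n in (3, 10, 50):
    print(f"{n} systems: {point_to_point(n)} point-to-point vs {via_shared_model(n)} via a shared model")
# 3 systems: 6 vs 6;  10 systems: 90 vs 20;  50 systems: 2450 vs 100

With a handful of systems the two look similar; at national scale the point-to-point approach becomes unmanageable, which is why the options below matter.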

So how do we solve this problem ?

  • We could use only one application from a single vendor everywhere…and there are some vendors who would advocate this approach!
  • We could rewrite every application to use a common data model – ideal, but not realistic in the short term.
  • We could try to invent a messaging system that could be translated automatically – this is the HL7 v3 approach and it has been unsuccessful so far.
  • We could create a ‘lingua franca’ that allows a single translation to a universal interoperability language, so that any translation only has to happen once – this is the openEHR approach.

So – it’s the DATA, not the application, that’s important, and until we have data that is universal, shareable, flexible, completely open and able to cope with the fractal complexity of health data, we are not going to solve this problem.  There is a very nice report on semantic interoperability from the European Commission, the Semantic Health report, which suggested that three things are essential for interoperability: a shared information model (such as the openEHR reference model), terminology (such as SNOMED CT) and content models to bring these together (such as openEHR archetypes). You can read the report here: Semantic Health Report

…and if you don’t have a way for clinicians (domain experts) to contribute to what the content of an EHR should be, then you won’t get there either.
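As a rough illustration of how those three ingredients fit together – this is not a real openEHR artefact, and the names and code below are examples only – a content model can be thought of as a node that binds a reference-model type to a terminology code under a clinician-friendly name:

# Rough illustration of the three ingredients working together: a reference-model
# type, a terminology binding and a content-model node that ties them to a
# clinician-friendly name. The names and code below are examples only.

from dataclasses import dataclass

@dataclass
class ContentNode:
    name: str         # clinician-friendly name agreed by domain experts
    rm_type: str      # type taken from the shared reference model
    terminology: str  # terminology system providing the meaning
    code: str         # example code, for illustration only
    units: str = ""

body_temperature = ContentNode(
    name="Body temperature",
    rm_type="QUANTITY",
    terminology="SNOMED CT",
    code="386725007",
    units="Cel",
)

print(body_temperature)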

Posted by: hughleslieMD | May 21, 2010

Why don’t EMRs solve the EHR problem?

The problem with almost every clinical system ever built is that the underlying core is based on some fixed data model that tries to make concrete some part of the clinical domain at some point in time. Often this is a developer’s attempt to translate the needs of the clinician into a technical environment.

Why doesn’t this work?

Because as soon as you try to nail down the clinical content, it all needs to change – because of our changing understanding of health care, the different needs of different clinicians, and so on. This means complex and costly re-engineering, and it can never keep up.

We need to realise that the EHR is not the application, it’s the information, and that an EMR is really just one view of an EHR. Another view of the same EHR could be a PHR, or some other specialised view.   To make this work, we need an approach that allows for flexible, computable clinical information that can change without the need to re-engineer systems, and that allows clinicians to define what they need rather than relying on technicians.

There are two main approaches to this in the world today:

  • One is HL7 v3, which is a messaging standard but is also being used to build EHRs in some places. The main problem with HL7 v3 for clinicians is that it is not approachable in terms of defining content in a non-technical way.
  • The other approach is openEHR (ISO 13606), which is an EHR standard and is approachable for clinicians because it separates the technical domain from the clinical domain. Systems can be built once, and new clinical concepts can then be introduced without re-engineering the system (the sketch below illustrates the idea). Clinical concepts can be defined by clinicians without having to understand any of the underlying systems. See: www.openehr.org and www.openehr.org/knowledge
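Here is that sketch – a deliberately tiny Python illustration of the two-level idea, nothing like real openEHR tooling, with invented concept definitions and constraints.  The code never changes; new clinical concepts are introduced by adding new definitions as data:

# Tiny sketch of the two-level idea: the code below never changes; new clinical
# concepts are introduced by adding new definitions (data), not by re-engineering.
# The definitions, field names and constraints are invented for illustration.

CONCEPT_DEFINITIONS = {
    "body_weight": {"value": {"type": float, "min": 0.0, "max": 500.0}},
    "smoking_status": {"status": {"type": str, "allowed": ["current", "former", "never"]}},
}

def validate(concept, data):
    """Validate a data instance against a concept definition supplied as data."""
    definition = CONCEPT_DEFINITIONS.get(concept)
    if definition is None:
        return [f"unknown concept: {concept}"]
    errors = []
    for field, rules in definition.items():
        value = data.get(field)
        if not isinstance(value, rules["type"]):
            errors.append(f"{field}: expected {rules['type'].__name__}")
            continue
        if "min" in rules and not (rules["min"] <= value <= rules["max"]):
            errors.append(f"{field}: out of range")
        if "allowed" in rules and value not in rules["allowed"]:
            errors.append(f"{field}: not an allowed value")
    return errors

print(validate("body_weight", {"value": 82.5}))         # -> []
print(validate("smoking_status", {"status": "often"}))  # -> ['status: not an allowed value']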

If we don’t understand this problem, then we are never going to see the outcomes from the computerisation of healthcare that we all know are possible.

Dear all,

Patient Care WG does need help from your collective memories and capabilities to track back through minutes and other documents.

We are currently evaluating the core parts of Care Provision (D-MIM, storyboards and interactions, Care Statement, referral, query and care record, care structures).

If you implement it and want to participate in an interview about your experiences, please let me know, so we can capture what you need to be changed.

This week PC WG worked hard on the project planning around all this. One decision made was to prioritize the Care Statement R-MIM part. This was derived from the Clinical Statement around 2005. However, several changes were made between the Clinical Statement pattern of 2005 – 2007 and the PC Care Statement. It is exactly here that we need your help.

The important question we have for you is:

Do you remember what the exact use cases or reasons were for making changes / adjustments to the Clinical Statement when adopting it in the Care Provision Clinical Statement?

This might be in areas such as: because of use case x we included / excluded class X from the choice box, or because of this we made changes to the recursive relationships, etc.

Some of this will probably be in the narratives of the D-MIM / Care Statement.

We need this input in order to determine whether we can simply replace the Care Statement 2007 version with the current Clinical Statement 2010 version, whether we need to add additional features or constraints first, or whether we should just include features from Clinical Statement 2010 use case by use case and rework the 2007 version.
