AIHI Seminar Series 2017 – Professor Enrico Coiera

Welcome to this Australian Institute of
Health Innovation Seminar. I’m Johanna Westbrook, and it is a great
pleasure to introduce Professor Enrico Coiera today. He really doesn’t require any
introduction to this group but nonetheless, many of you will know that
Enrico is a leading national and international researcher
in health informatics. He’s written the bible
on health informatics, which is, if you haven’t
read it you really must, it’s a fantastic detailed
textbook on health informatics but it’s incredibly
readable as well, which is a great benefit. So it’s really worth reading. It’s cheap as well, okay,
it’s cheap as well. Enrico has been running – as
we know – the longest-running health informatics research
centre in Australia. He’s made an enormous
contribution to all sorts of areas of research but one of the areas
he’s probably most renowned for is his really insightful, pithy pieces that appear in
the BMJ or some of these leading journals about new
ideas in health informatics. And he’s often one step ahead
of the curve of those areas. And many of you will
know these pieces, which are really very highly
cited and have shifted the direction of health informatics
in many different ways. So it is a really great pleasure
to have him speak today on I’m sure, another
really bright idea. Enrico.

So nothing to live up to, then. So normally I’m the one
doing the introductions, and it’s– your hardest job is to think
of the hard question at the end if nobody asks
the question. So you’re off the hook. Joe, you don’t have to do that. What I thought I would
do today is to– today’s an ideas talk,
and so I apologise for that. And to let you almost
into my head a bit. And again, I
apologise for that. This is sort of the way I
understand informatics works, and why things don’t always go
the way we’d like them to go. And what I’d like to give you
are two models through the talk – and I hope they make
sense – that might maybe help you think about things
as you progress. So this is sort of the lay of the
land in Informatics, I think. Up here is the Shire
of Standards, which is a beautiful
green, quiet place. Everything is in order. There’s a place for
every standard and every standard
has its place. We build beautiful
systems that work, and it’s just a lovely,
digital Nirvana. What happens, though, is occasionally one or two
of the folks in the Shire go and cross this thing called
the River of Implementation into the rest of the world. And then things
start to change. You might be stuck in the
rocky Workaround Mountains. So these are the places where
people will take the tools you’ve built and go out of
their way not to use them in the way you designed them. You might get stuck in the
Human Factors Marshes, which is where people will
use your system exactly the way you designed
it, and cause trouble. And if you’re very unlucky, you’ll end up in this
dark place called Mordor, in the view of the
giant public eye. Right? And you don’t want
to be down there when things are going wrong. So why is that journey
often so difficult? And are there other journeys? Are there other maps?
Is my question for the day. And so I thought given that,
as Joe has pointed out, I’m now very old, I would go
back to a couple of things I’ve said in the past
just as signposts, so. This is something I said
when I was quite young. Kind of a bold statement that
informatics is as much about computers as cardiology
is about stethoscopes. The idea that the technology
is not the point of why we’re here. And also, any attempt to
use technology will fail if our motivation
is the technology and not solving problems. I think the thing I would
change today is change the word any because that was pretty
absolute – when you’re young everything is absolute –
to say almost always. The next thing, though, is to say that that’s not
necessarily new thinking. This is Chuck Friedman’s
fundamental theorem of informatics and Chuck is another
great thinker from the US. And this is not a statement
of what will happen. This is a statement of saying, Informatics is going well when
people plus computers are better than people unaided. It’s a statement of intent, and
people often misunderstand it. So both those things really
are around saying something about technology needs to make
us do things that are better than otherwise they would be. But I want to contrast that
with something else I wrote quite a long time ago, too, which is to reflect that
technology doesn’t really always cover everything that we do. So here I talked about
something called a communication space,
which I asserted then, and I think is right, is the
biggest part of what we do. Most of what happens in terms
of information in the health system is between
people, still. It’s where we talk
to each other, where we make sense of the
world, and we interact. The electronic record doesn’t
mediate all of that. And so where does that
fit in the world of Chuck Friedman’s theorem? That’s almost
ignored, isn’t it? So those are the two
worlds I’d like to try and reconcile today.
So let’s go on a journey. The first thing I’d like to
talk about is this notion that information is going
to make things better. Adding information to the
bucket of healthcare is going to make
healthcare better, right? A little drop of information
is going to make all our processes better than before. And I guess if a little
information is good, then a whole lot’s got
to be better, correct? Just by definition.
So Joe mentioned the textbook, and I’ve summarised the
textbook for you in one slide. But it’s really worth the
journey of buying it, really. So this is really a summary of
all the better analyses and systematic reviews across all
the major health informatics intervention classes. And it’s worth spending time
to understand what they say. Electronic health records
decrease nurse time but increase doctor
documentation time. So whenever you look,
the docs spend more time documenting than before,
records are more complete, but we can’t typically
demonstrate any effect on quality and safety. Repeatedly.
Doesn’t happen. Automated care
pathways and plans are very good at reducing
practice variation, and we don’t like
variation, typically. We find increased
compliance with standards, and we can see improvements
in process behaviours. So test ordering more
in line with standard, or drugs ordered
more appropriately. But again, typically no
impact on clinical outcomes: length of stay, death. Typically.
You’ll find some papers, but overall that’s what all
the systematic reviews say. And we have to kind of
take it at face value. Let’s look at telehealth, now telehealth makes
patients happier. In some cases, there are
improvements in outcomes. In chronic care, for example,
chronic heart failure, etc. It’s surprisingly
not cost effective. In fact, there was a paper just
in Health Affairs yesterday that reported the same surprising
result that we’ve now known for a while: that telecare
doesn’t necessarily improve cost effectiveness. And then lost at the bottom
here, lost at the bottom, this little thing called
decision support, which we never talk about
much, is the only class of intervention that
always improves outcomes, always improves safety and
efficiency when tested in the meta-analyses.
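That one-slide summary can be paraphrased as data. The entries below are only the talk’s broad-brush characterisations, not figures from any particular review:

```python
# The one-slide evidence summary, paraphrased as a data structure.
# These are the talk's broad-brush claims, not exact review findings.
evidence = {
    "electronic health records": {
        "benefits": ["less nurse time", "more complete records"],
        "costs": ["more doctor documentation time"],
        "outcome_effect": None,  # typically not demonstrable
    },
    "care pathways and plans": {
        "benefits": ["less practice variation", "increased compliance",
                     "better process behaviours"],
        "costs": [],
        "outcome_effect": None,  # typically none on stay or mortality
    },
    "telehealth": {
        "benefits": ["happier patients", "some chronic-care outcomes"],
        "costs": ["often not cost effective"],
        "outcome_effect": "mixed",
    },
    "decision support": {
        "benefits": ["safety", "efficiency"],
        "costs": [],
        "outcome_effect": "consistently positive",
    },
}

# Which intervention classes show consistent outcome improvement?
winners = [name for name, claims in evidence.items()
           if claims["outcome_effect"] == "consistently positive"]
print(winners)  # ['decision support']
```

Laid out that way, the oddity the talk points to is easy to see: the one class with a consistently positive outcome effect is the one that gets the least attention.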
It’s interesting, though, when you reflect on where all
our effort goes in eHealth, where we’re investing.
It’s interesting. Okay, so faced with
that conundrum, how do we start to bring order
to that and make sense of it? Is this bad, or is this in
fact what we would expect? And so an idea that I like is
this notion of the value of information. So if you do clinical research
you know this idea of the number needed to treat. How many patients must I give
my red pill to before one patient gets better?
Number needed to treat. There’s another idea
which is similar, which is the number
needed to read. How many times must
somebody read this document before something changes? If it’s a guideline or
an electronic record, how many times does this
guideline have to be read before something
changes in the world? There’s a thought. And so this led me to kind
of try and unpack it. So this is a very
simple deconstruction of what happens when we use
an information system. So we turn it on.
We interact with it. Say it’s a medical record, we might receive
some information. Sometimes that information
changes what we do. Patient’s allergic, I’m not
going to prescribe that drug. Hopefully, those decisions
result in some change in what happens in care, but that
doesn’t always happen. I might order a drug, and it
may never get administered. And then at the very end there, something might change
in terms of outcomes. So there’s a pipeline, and there’s a loss as you
go down that pipeline. And so the value of information is very different
at each stage. So reading an electronic
record sort of happens here, but it’s unlikely to be as
important as the information that tells you to
change a care process. The other thing to note
is that there are– it’s busier over here
on the left, isn’t it? Many events, many times I
interact with that record. Very few events where
outcomes are changed. But equally the further
you go down that chain each event is much
more valuable. Just to give you an idea, so here is some work by
Wouter Gude – he was actually a visitor here at the beginning
of the year – who took that model and he tried to see if
it would help him understand how they were going with the
decision support system. And the point of this decision
support system was to help clinicians understand whether
their practice was in line with the recommendations
for cardiac rehab. And he broke it down
into those steps. You log into the system. The decision support system
gives you some information about how well you’re
doing compared to others. You work with the system to
agree a plan and the things you’re going to change in
your clinical behaviour. We then go and
measure the changes, and then there might be
some outcome change. And Wouter’s question was
why did this system have no impact whatsoever
on clinical outcomes? And so he took the value chain,
and he just deconstructed it. I’m sorry for the
formatting challenges there, but you can see just
in terms of frequency, 50 feedback sessions, lots of information about
where people were not meeting their indicators. Clinicians agreed to
change their behaviour nearly 380 times. They only actually
did it in 31 cases. And surprise, surprise
no change in outcomes. So where is the problem? It’s probably not here in the
decision support system, but it’s something here in
terms of integration with workflow and compliance. So this chain gives you at
least an indicator of where things are going well
or not going so well. Now from my point of view, that’s great but not
really that useful because there’s still
no value attached. That’s just a set
of frequencies. Those 31 actions that were
completed might have actually been really important. So it’s important to
not look at frequency but this idea of the true
value of information. And so this is my
only technical bit, and it’s easy, really.
Go with me. So this is the
technical definition of the value of information.
So you’ve got a decision. A patient’s got a symptom, and I want now to do a test to
see what’s wrong with them. I’ve got two tests. And which one is the
one I should use? And so what you do is you
calculate in decision theory what’s called the utility, or expected utility,
which is two things. It’s how likely is it that I’m
going to succeed in finding that thing or how
likely is that event, and how valuable is
it should it occur? To give you an example, if you’re a climate change
sceptic, and you say, Look, the likelihood of
climate change happening is really quite small. Decision theory would say,
Okay, that’s a very small p. How big is the value
of getting this wrong? And it’s kind of, well,
you’re betting the planet. So even if you’re a sceptic, decision theory says you
better do something because the price
is pretty high. Yeah?
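The arithmetic being described is small enough to write out. A minimal sketch, with invented probabilities and utilities for the two-test choice and the sceptic’s bet:

```python
# Expected utility = probability of an outcome x its utility if it occurs.
# All numbers below are invented purely for illustration.

def expected_utility(p: float, utility: float) -> float:
    return p * utility

# Two hypothetical tests for the same symptom: chance of finding the
# problem, and the utility of finding it (same scale for both).
eu_chest_xray = expected_utility(0.25, 100.0)  # 25.0
eu_blood_test = expected_utility(0.50, 40.0)   # 20.0

# The value of information is just the difference in expected utility
# between the two options.
voi = eu_chest_xray - eu_blood_test
print(voi)  # 5.0

# The climate-sceptic example: a small probability times a huge
# (negative) utility is still a large expected loss.
print(round(expected_utility(0.01, -1_000_000.0)))  # -10000
```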
So that’s all it is. It’s very simple. And so you calculate your
expected utility, or value, for both of those
possible tests, yeah? And so technically, the value
of information is this: it’s just the difference
between the two tests. So how much value was there in
doing the chest x-ray versus doing the blood test? Well, I might have
found 10 more people, and that has a real
value in the real world because I can do
something about it. So there’s your technical
value of information. This is as hard as
it’s going to get. So that then allows me to
give you your first model. And what I’ve done here
is I’ve taken the chain, and I’ve put it down
the bottom here. These are just steps
in a process. And then I’m just saying, Let’s measure the value of
the system at any one time. And just to keep things easy, I’ve said, This is just the
world as it is without the computer in the way. And so the first thing
it does is give us a way of plotting those
things I started with. So that’s Chuck Friedman’s
fundamental theorem up there. That’s using computers
to make things better. And that’s my old
communication space down here. That’s where technology doesn’t
really help us so much. We’re doing what we do anyway. So what I can do now is for
each of those five things that I’ve talked
about, records, etc. Plot them here. So let’s plot the
electronic record. So we know the electronic
record is a pain to use. Doctors spend more time. There’s a disutility in
engagement with the record. We also know – and I’m
being generous here – that there’s probably very
little benefit in terms of outcomes down there. I’ve given it some because
it’s just got to make a difference but it’s
very hard to measure. Let’s look at some of the
other interventions. Care pathways, they
really– well, they make things easier to use because they reduce
interaction. Things are all in
bundles for you. You have order sets, etc. And the big value is
here in process changes, not so much here. But that’s still great
because process changes might save dollars. So you might use them for
a very different reason. Teleconsultation, I always get
surprised when people say telemedicine makes no difference
in terms of outcomes. Well, it’s meant to be a
replacement for a face-to-face, not an improvement. So its job is to reduce the
cost of accessing somebody, reducing travel time. You can probably access
people in the evening when they’re not
otherwise available. So its benefits all come
here around access to care. And that’s not a negative. That’s exactly what you’d
expect. Decision support.
Traditionally, anyway, decision support
really wasn’t popular because there’s a
real disutility or a cost to using it. If I’ve got to answer
questions from the computer: the patient has x,
the patient has y. But what’s happened over
the last few years is that we’re finding ways of
embedding decision support so that it doesn’t
really cost so much. And it really has
huge benefits, as I’ve said, down the track. So I’d like to think that
when we look at technology and whether it makes
a difference or not, we should be thinking
about where the benefits and the costs are and asking
appropriate questions. I would argue that anybody
who tries to demonstrate the impact on outcomes with the
electronic record doesn’t understand the question they’re
asking because it’s so far away from outcome why would
it make a difference? It’s an enabling
technology, yeah? Other things need
to work with it. So let’s just summarise, then. So information only has
value when it changes what we do for the
better, right? There’s got to be that
number needed to read. It’s got to do something. Lots of information in a giant
bucket achieves nothing. I think those value
signatures work for me. And other people seem
to like them too, as a way of understanding
the nature of different interventions. The electronic record as I’ve
said is just too upstream from decision making
to easily see changes. And so we should rather see, I think, health IT solutions
bundled on top of the record, etc. And just my old bugbear: that
we really need to be talking actively about decision
support technology, which is just lost endlessly
in the discussions around digital health equals
electronic record. Okay.
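The value-chain idea can also be sketched numerically, using the frequencies quoted earlier from the cardiac-rehab feedback study (the stage names here are paraphrased):

```python
# Value-chain ("number needed to read") funnel: how many events
# occurred at each stage of the pipeline, per the figures quoted
# earlier in the talk.
pipeline = [
    ("feedback sessions delivered",   50),
    ("changes clinicians agreed to", 380),
    ("changes actually carried out",  31),
    ("outcome improvements seen",      0),
]

# Where does the pipeline leak? Agreement-to-action is the big drop.
stages = dict(pipeline)
agreed = stages["changes clinicians agreed to"]
done = stages["changes actually carried out"]
print(f"agreed-to-done conversion: {done / agreed:.1%}")  # 8.2%
```

With fewer than one agreed change in ten actually carried out, it is no surprise that nothing shows up at the outcome end of the chain.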
Is that all right? So the next thing I want to
talk about is this idea that human work can somehow be
replaced by the computer. The assumption that the things
that we do are automatable. Well, I’ve got to say, it’s increasingly the case that the
things we thought were not, are – plain news stories can now
be written by bots. So there’s this notion that
work is linear and you can just turn a piece of it into a
machine and that’s what the Friedman Theorem says. The thing that we’ve
noticed, when you do that, is that people rebel. Don’t we? We rebel, we work around it, we don’t like that
piece of machine, or doesn’t quite do
what we want to do, so we do other things there. And that, I would argue, is really enabled by the tools
that I would place in the communication space
– talk, paper, all sorts of other
things that people have, sticky tape on walls. You probably have seen this
one, but I just love it, it says, this is where the idea for
the record starts to get a little complicated. You’ve seen that one
before, I love it. And so, as Jeffrey would say, that area is work as imagined, and this is work as done. So things get messy
once you start to try and map things to actually
what really happens. When you leave the shire, and you go into the
world of workarounds. So, and just to
give you an idea, here are some photographs I
took a few years ago with [inaudible] of a New
South Wales hospital with a Cerner system, which I hope has been
updated since then. And just to point out what
we mean by workaround, so there’s the lovely, very expensive Cerner system. There somebody has
stuck on the wall, well what do all
the icons mean? Because the icons have no–
they don’t mean anything. An icon is meant to represent
something obviously and easily, but clearly, the designers there didn’t
really care much about that. Look at this one. Protect your security. Don’t forget to log off. What design failure is that? That it’s so easy to walk away
from a system and leave it live and let somebody
else log on as you. That is kind of a telling
failure of system design. Oh, and just in case,
here’s the manual. All right, so what
I like to say is, Just walk around
your GP’s office. Just walk around any ward or
any other place you go and look for the bits of paper.
Because the bits of paper tell you where work as imagined and
work as done don’t match. So why does this happen? Number one, typically because
designers often confuse the work they imagine we do – or the
work we tell them that we do, which is still what we imagine
happens – with what really happens, and streamlining
things is great, but often there are variations
that happen locally that are there for a reason, and by cutting them
off you do things that you may not anticipate. Designers always assume
that the computer has your full attention. I’m sitting there, tapping
away at my Cerner system, except in the hospital,
I’m being interrupted. My beeper’s going off. A colleague is talking to me. I’m worrying about the patient
I was asked to see down in the emergency. I’m cognitively loaded and
doing lots of other things. That machine does not
have my attention. The other thing is, in reality,
work just is not linear. It’s not a set of
predictable steps. And there are often dependencies
that we don’t anticipate. A good colleague of
ours, Marie Bismark, looked at the use of electronic
prescribing and was looking at what happened when you move
prescribing from the bedside to giving a prescribing station
which was in the ward area where all the records were. And she noticed that by moving
the electronic prescribing from the bedside to
somewhere remote, you lost something that
people didn’t think about, which was that at the
bedside the clinicians, the doctors and nurses, would discuss the patient and
the data at the same time. They would discuss what’s going
on in or around whatever. By decoupling prescription
from being present with the patient, you
lost an important part of the process that was
never imagined at the time. Okay. So one of my messages is that
workarounds are gifts to us because they signal there’s
a mismatch between what we think we’re doing and
what people really need. So workarounds are repairs
to our faulty designs. They suggest information
that’s missing that should be there. They suggest new
pathways or tools that we haven’t imagined. They are ways of also
of making us understand that work is shifting. So you can design the
system for 2017. Come 2019, patients
have changed, work has changed,
treatments have changed, and the system hasn’t kept up. And what people will do
often is they’ll resort to communication tools, emails, chats, bits of
paper stuck to the wall to circumnavigate
what’s happening. So I do remember early on
when I was working in the IT department back in 1850 at
Royal North Shore Hospital, to be really– I thought what is all
this stuff on the walls? And can’t we just take it off? It’s an affront to my need
to have everything clean and organised. It turns out that my office
is not as good as Jeffrey’s office because Jeffrey’s
office is full of workarounds everywhere. So this leads us to really
the final part of the talk and trying to bring
those ideas together. This is not a small question. Which tasks should we automate? We never really ask
that question. We just presume that
there’s work out there and that technology can come
in and make things better. We never actually say, No, that’s not really a
good object for automation. We just dive in, and then we succeed or we
fail without that reflection. So there have to
be, by definition, the sorts of processes where
technology is the winner, and the real– great limiting
step is the human. Obviously, there are things
where computers are good at which we’re not so good at, but equally, there are a
whole bunch of things where technology is not king. We are in fact the ones who
are much better at it and technology is the bottleneck
for getting things done. So how do we try and put those
different worlds into some sort of model? And so this is the model I
propose, my second model. So I talked a lot about
expected utility, and I talked before
about trying to measure the utility of a technology. And so that’s this line here. And I’m saying we’ll also just
imagine for any given task there’s a utility for
a human doing it. In other words, how well
does the computer do versus how well does
the person do it. And let’s just plot those out. So the first thing you
can notice is that there’s obviously a line, and everything above the line
is where computers are better than humans and
everything below the line is where humans triumph. So let’s now just go through
each of those spaces because this then is the geography
I want you to think about. So what’s happening here? Computers are fantastic,
people are suboptimal. This is calculating pi to
5,000 places in a second. We’re just not
going to do that. Auto-trading on
the stock market, we’re just not
going to do that. Interpreting complex 3D images. Computers are better at that. So there will have to be a
whole bunch of things where rationally it makes
sense that we automate. Down here is a
really bad place. This is where neither
humans or computers are good at things. And right down the bottom here
where people are very bad and computers are bad,
I call it a dead zone. That’s probably
Mordor, isn’t it? There must be tasks in that
place and we’re just– there’s not much we
can do at the moment. And we can slightly
improve our performance in what I call the
mitigation zone. This bit here, bottom right,
where people are great, technology is poor is really
what I was always talking about when we talk about
the communication space that’s where we win.
And what do we have left? What we have left is what I
would call a joint workspace. This is where humans and
technology can both add value. Above the line is the beautiful
world of Chuck Friedman. This is where the computers
make us much better than we’ll be on our own. And down on bottom is the
world where we know more. This is where workarounds
happen, right? So technology’s an
aid, but it’s just– we’re a bit smarter. And so really it would
be great to know, before you started
what you’re doing, where you were going and where
your task set fits because that really dictates how
you choose to approach it. And just to kind of now bring
the two models together– so I had those
signatures for you– the signatures for
electronic records. And this is sort of the
space of where things go– well, they go but
don’t go so well. So what I would love to see,
say in the next edition, which I will write in 15
or 20,000 years’ time, is to see this sort of a
diagram where we can say for a given technology,
how well is it doing? Where does it fit in the space? Clearly right now there
are some real deficits, and we need to change them. We should be able to
predict, or if not predict, at least then measure after
the fact to make really informed decisions. And I think with that
perspective you really change the way you approach the
evaluation of technology. And you change the
approach of design. At least I hope I do. Anyway, I think that’s enough
of being inside my head. And I’d just say, If you
know where you are, then you’ve got a better
chance of knowing where to go. So the idea of the compass is what I’m hoping to get to
you. Thank you very much. That’s exactly what I’m saying. That workarounds are clues
to designers about mismatch. It doesn’t have to. A workaround might say,
today we do it this way. Stop doing that old bit.
Swap modules. So it might simply
indicate a change of work. Another approach is to say there
will always be a piece of work that needs to
be locally adapted, so let’s give you the
tools to do that. Let’s embed a bit of local
adaptability into the system. So if you today go to
any hospital and say, I’d like to change
this box to here, you can’t do that as a user. You’ve got to negotiate a
change request through the IT department, who will put it onto a
list of 5,000 priorities, and it’s unlikely to happen. So I think accreting changes
without understanding that they’re adding the
complexity is a bad thing. So you have to actively
manage complexity growth. I completely agree. So I didn’t say
they’re not useful. I said it’s hard to measure
impact on outcomes. It’s a different thing. So the way I see it is that
they are core infrastructure. So it’s like having roads.
You have to have them. However, you wouldn’t say
you’ve got a transport system if you’d only built roads. So you’ve got to add the
other stuff on top. So IBM Watson, sort
of set it aside, because as academics
we actually don’t know what’s inside the box, and we’ve rarely
seen evaluations. So at least some of
us are sceptical. It might be more
marketing than reality. But if the question is, are we moving to a place where
there’s more decision support, there’s a lot of
interest in it now, which is fantastic. But those technologies
have been available since the mid-80s, so
30 or more years, and were not adopted then, so the things that have been
well adopted are things that hide behind the scenes. So pathology reports are all
screened computationally. ECGs can be read automatically. Radiologists will have systems
helping them look for lesions. So anything that is– or you might get alert prompts
or reminders which don’t require you to put information
in but respond to what you do. Those are the sorts of
things that we’ve seen, and they probably reflect
the fact that people are unwilling to give information
in return for support. So I think half of the anxiety
I’ve got is that there’s a lot of excitement now
around decision support, machine learning. People talk about AI
but they’re not– they’re just talking
about algorithms. They certainly can
perform well, but they’ve always
performed well. The issue really is how
does that fit with work, and that hasn’t changed. So I think you are exactly– we’re on the same
page, actually. So what I would say is people– there was a thing called the
large-scale demonstrator in the UK for telehealth, which essentially killed a lot
of telehealth in NHS because they said based on narrow
judgments there were no improvements in outcomes
and it was more expensive. But they didn’t factor in a
lot of the things that you spoke about, which are
the unmeasured benefits. And so I think my view is it’s
actually an unfair thing to criticise a record because you
don’t see outcome changes because that’s not its point. And I would say it’s unfair
to criticise telehealth because again, it might
be more expensive. I mean, so if you go to
that Health Affairs paper which came out yesterday, they don’t cost downstream
benefits at all. They don’t cost the person
with an obscure infarct that’s happening now and they send
straight to emergency, which would have cost us
hundreds of thousands had that been missed. So I think we are limited by
what people will measure. Look, I think yes. But going back to that idea
of number needed to read. So there are two ideas. One is can you make
the HR easier to use? I think the answer
might be yes, and can you increase the
likelihood that if there is relevant information it
gets seen and acted on? Yes, you can do
that, but still, the point is how many times
do I read the record before I actually change something? Because most of the
record is very routine, and so that’s– it’s just simply a
question of numbers. So I think when people–
all people do is say look– when they do the evaluations
the records are placed in the system and then they look at
benefits on process at the end in terms of length
of stay, death, etc. I think they presume that the
input-output is going on. They’re not measuring
it at that micro-level. You’d have to do a study, which
would be great for a PhD. Although in that paper they
didn’t measure outcomes in the end, so they didn’t even
go to the end of the chain. So I would say, Well that’s a
premature assessment. You can’t really
say much at all. Yes, it cost more, but maybe
that was well worth that cost. It’s the same argument
for having GPs. People say, People are wasting
their time seeing their GP. Well, for every early
tumour detected think of how much money
and suffering is saved. So primary care is a great
example of huge cost benefit. So I think second part first. I mean, I think that the value
chain is a very simple idea restated, I think. It’s just a simple model but
it’s almost a checklist for looking at literature,
evaluating what you read, or guiding your own thoughts
about the way you approach the decision to intervene. As I said, it’s actually
a decision to intervene, to build something. We shouldn’t just assume
we’re going to do it. So I think it seemed
like a useful framework. We’ve used it a few times
now for some of our work in our own centre,
and it does really– what it often does is show
huge gaps in the literature. Okay, people have actually
only measured these two bits, and they’ve all forgotten this
chunk here in the middle. And, I mean, in terms
of growing complexity, I’ve got some views
around complexity. It needs to be
actively managed. That we just don’t add
stuff in without deleting. And I think there was a
question early on about how variation adds to complexity. So we’re not very mature at
developing ways of allowing people to vary
practice appropriately in a controlled way. So we have the universe of
everybody has the variations that they want, or here’s the standard
you should comply with. What we don’t have is the
adaptive piece in the middle which says, Here’s the piece we all agree
probably we all should do. If you could then now adapt
around that according to the following well-behaved rules, I think we have a chance
of controlling it. It’s a beautiful point and you remind me of that classic
book by Don Norman called The Psychology of Everyday Things
or Design of Everyday Things, and he talks about the
handle on a door, and he says it should be obvious
when you get to a door, if you should pull or
push based on the handle. So the design should tell
you what happens next. And you think about that
lovely Cerner system that showed up, I mean, how much information
did you need to have to use that system? The design didn’t tell you
what had to happen next. So if you look at Apple products,
how beautiful are they? And you compare the money
invested in design there for that commodity
product with the stuff that we have to use clinically. So there’s a real lack
of value on good design. And I think until – and
there must be something about the healthcare
marketplace that makes it so – part of the answer I think, is that we don’t
value beautiful, clean, simple design. The other part is that we all
want to do things differently, which goes back to
Jeffrey’s question, and I think you
just have to say, Look, practice is indeed not – this is not a Toyota factory –
there are differences but let’s
manage those differences in a way that at least,
is well behaved. I don’t know if
that’s an answer but. Well, I think it’s very clear. I’m sure everyone would agree
that you delivered on the promise of presenting some
really interesting ideas, in a really excellent talk.
So thank you very much. Thanks, Jo.
