I have had the unique opportunity to work in two industries that both put the customer’s safety at the forefront of their business, yet whose products carry inherent safety risks. In other words, you are actually safer if you do not use their product, which sounds ironic until you learn which industries they are: healthcare and automotive. Not so ironic anymore, right?
Obviously the goal of healthcare is to make people healthy and keep them safe, but every time you set foot in a hospital, you are inherently increasing your health risk (Data Source).
In automotive, the goal is to get a customer from point A to point B as safely as possible; however, every time you get inside a vehicle, you are increasing your personal risk (Data Source).
Both of these industries are also predicting drastic safety improvements in the upcoming years and decades. The problem is that both industries are relying on technology for these enhancements, and technology always carries unintended consequences that can cause the very problems you were trying to solve in the first place. Let’s consider some examples.
Healthcare information technology and electronic medical records are claimed to both improve efficiency and increase patient safety. You can do barcode medication administration, allergy and drug interaction checks, and clinical decision support, just to name a few. All of these things, when properly implemented, can definitely increase patient safety. In fact, a lot of emphasis is placed on implementing these technology features for their safety benefits. Technology, however, has a downside that people do not focus enough attention on. The quality of your technology and its implementation can directly and indirectly impact the safety of your patients, even when you are designing a part of the application that has nothing to do with patient safety. The mental fatigue of using systems with poor human factors, chronic software crashes, or general “bugginess” takes a toll on a clinician’s ability to make clinical decisions. False positives and false negatives from “safety features” create distrust in the technology, while failed hardware, such as unconfigured or misconfigured barcode scanners, defeats the original purpose of the design.
Like healthcare, the automotive industry is seeing a huge technology shift as well. Infotainment systems and interactions with your mobile device are bringing entertainment and convenience into your daily commutes and family road trips. Hands-free texting and calling and turn-by-turn navigation can all improve your safety by keeping your eyes on the road and your hands on the steering wheel.
The quality of the technology and its implementation, though, can directly impact safety. A company could spend a lot of time and money designing the safest infotainment system with minimal clicks and user distraction, but if the system as a whole is glitchy, all the effort is for nothing. If a customer has Bluetooth or WiFi issues, they are going to fumble with their phone to get it connected, navigating deep into the phone’s settings to disable and re-enable the connection to try to “reboot it” – all while driving. If the phone isn’t recognized by Apple CarPlay or Android Auto when plugged into USB, the customer will be checking the cable connections, maybe even resetting the phone, to get this feature (intended to make their drive “safer”) working again. In cold weather, if the touchscreen on the infotainment system responds slowly or not at all, or lags behind the phone it is interfacing with, the driver will be distracted trying to get it to work. So that “perfectly designed system” has not changed the overall risk, because the other aspects of the technology are still causing distracted driving.
AAA conducts yearly research on distracted driving and recently found that drivers remained distracted for an additional 27 seconds after interacting with the car/phone/infotainment system (Data Source). These statistics are based on working systems and do not account for the distraction of a driver who has just been wrestling with a technology failure, full of frustration and anger.
Items Leading to Poor Quality
As briefly highlighted above, there are countless things that can lead to poor quality and ultimately undermine the very safety your system was meant to provide. Here is a summary of a few that I see a lot and will probably expand upon in future posts.
- Focusing on speed to complete a new feature request
- Lack of interaction knowledge (i.e. lack of “systems thinking”)
- Misunderstanding of human factors and mental fatigue caused by designs
- Downplaying small changes (related to lack of systems thinking)
One way to mitigate some of these issues is to have a well-balanced measurement system: if development speed is important to you, figure out how to measure quality as well, so one is never optimized at the expense of the other.
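As a loose illustration of what a balanced measurement system might look like, here is a minimal sketch that tracks a speed metric (cycle time) alongside a quality metric (defects that escaped to production) and flags work that hit one target while missing the other. The structure, metric names, feature names, and thresholds are all hypothetical assumptions for illustration, not an established methodology:

```python
# Hypothetical balanced scorecard: flag releases that met the speed
# target but missed the quality bar, or vice versa. All names and
# thresholds below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ReleaseMetrics:
    feature: str
    cycle_time_days: float   # speed: request to release
    escaped_defects: int     # quality: bugs found after release

def flag_imbalanced(releases, max_cycle_days=30.0, max_defects=2):
    """Return feature names where exactly one of the two targets was met,
    so fast-but-buggy work gets flagged just like slow-but-clean work."""
    flagged = []
    for r in releases:
        fast = r.cycle_time_days <= max_cycle_days
        clean = r.escaped_defects <= max_defects
        if fast != clean:  # one target met, the other missed
            flagged.append(r.feature)
    return flagged

releases = [
    ReleaseMetrics("barcode-scanner-config", 12.0, 5),  # fast but buggy
    ReleaseMetrics("allergy-check-update", 45.0, 0),    # clean but slow
    ReleaseMetrics("infotainment-patch", 20.0, 1),      # balanced
]
print(flag_imbalanced(releases))
# prints ['barcode-scanner-config', 'allergy-check-update']
```

The point is not the specific metrics but the pairing: any speed measure you report should travel with a quality measure, so a team cannot declare success on one dimension while quietly eroding the other.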
Post in the comments some of the things you have experienced that were intended to create a better system but ended up having the opposite effect. What are some ways you mitigate these issues?