Dangerous Driving

As a driver, you’re in control of over a tonne of metal and glass hurtling along at high speeds – a lethal weapon, as a glance at the relevant statistics illustrates. In this post I want to ask: is stuffing cars full of potentially distracting – and manipulable – technology a dangerous move?

Concerns are abundant. For example, Samantha Murphy’s post discusses Ford’s updated SYNC system, which was designed as a voice-controlled communications system with myriad applications. It allows drivers to, for instance, activate their mobile phones (and dial numbers) by voice, and have text messages read aloud; in theory this allows greater focus on the road. In its own study, Ford deemed SYNC to have ‘significantly reduced the level of distraction’ faced by drivers.

Indeed, it was the use of mobile phones whilst driving which originally sparked this whole debate. Do innovations like SYNC, which aim to do away altogether with the manual operation of technology whilst driving, solve this dilemma? Perhaps not, as an old THINK! radio advert emphasized: “It’s hard to concentrate on two things at the same time. Think! Switch your mobile off!” So even with ‘hands on the wheel and eyes on the road’, talking on the phone carries a risk – it can cause ‘dangerous mental distractions’. Psychologically, it can require greater concentration than talking with someone physically present in the car. Nor is speech the only distracting aspect of in-car technology; often, simply inputting or clarifying an instruction to the tech system can ‘create a level of cognitive demand which makes it harder to attend’.

Source: newsroom.aaa.com

SYNC’s latest incarnation has garnered further criticism for its extra features. Notably, it allows internet access, letting drivers access social media whilst on the road; they will be able to, for instance, check Twitter and even retweet. Surely this is distracting? Isn’t giving drivers the option to check Facebook and Twitter every few minutes unnecessary and potentially dangerous? A specialist on distracted driving, David Strayer, believes so: “Going on Facebook or sending a Tweet is engaging in a complex conversation that should absolutely not be done while driving.” Strayer’s most recent research concludes that “despite public belief to the contrary, hands-free, voice-controlled automobile infotainment systems can distract drivers”. Nor was SYNC even the most distracting system tested; Mercedes’ and Chevrolet’s equivalents both registered greater distraction ratings in the same series of tests.

Built-in tech systems aren’t the only problem. Google Glass has sparked debate again after a US driver was reprimanded for driving whilst wearing the device – again, hands-free but with major risk of driver distraction. It demands your attention, diverting it from the road, and its pop-up notifications may cause sudden distractions. In the UK, the Department for Transport decided to intervene before the device was released, banning its use whilst driving. This is a good example of policy-making preventing a problem before it begins; this isn’t a situation in which the user should be free to use the technology at their discretion – common sense (like switching off notifications) only goes so far. This kind of technology should be governed in the same stringent manner as mobile phone use.

Another area of concern with regards to in-car technology is the potential for ‘remote hijacking’. As cars become ‘smarter’, with fantastic technological accessories like automatic parking and braking (Ford will launch the latter next year), they also become more vulnerable to being hacked and controlled. Chris Valasek, an expert on vehicle security, explains that these technologies controlling steering and braking are highly amenable to manipulation. Computer scientists, in attempts to broadcast these concerns, have demonstrated frightening powers over cars, like slamming on their brakes and controlling their steering, all from their laptops. Watch Valasek in action here:

Valasek’s hacking involved physical access to the car, but remote hacking has already been shown to be possible. Part of the reason is that technologies described above, such as SYNC, operate through external wireless systems like Bluetooth and Wi-Fi, which hackers can use as portals into the car’s computer network without physical access. The problem, says Valasek, is that, disturbingly, manufacturers aren’t taking note: “security seems like an afterthought”. The consequences of malicious car-hacking could affect any driver, pedestrian or cyclist, so it is imperative that security becomes not an afterthought, but a central consideration.

Although this all may sound pessimistic, car technologies do have great potential to improve safety as well as convenience. But a key point for policy-making is that it is exactly these kinds of technology which must prove themselves safe according to the precautionary principle – the burden of proof of safety lies with the car and technology manufacturers. The hack-and-control issue is more of an uncertainty, as there have been no actual malicious cases yet (the above were under test conditions) – but here we must remember that “lack of evidence of harm is not the same thing as evidence of lack of harm”: the threat cannot be ignored.

In both arenas, therefore, it would be prudent to have regulations dictating, for instance, a minimum level of computer security, and standardized testing of possibly-distracting built-in technologies, to be conducted by the manufacturers. For distraction, so far there are only voluntary guidelines ‘that encourage automobile manufacturers to limit the distraction risk’ (similarly, the organiser of the aforementioned distraction study ‘urges manufacturers…to reduce cognitive distraction’); and for hacking, Valasek wants to use ‘public pressure’ and ‘shame’ to compel manufacturers to step up security. But I think formal rules are necessary additions to existing vehicle regulations. Who is deciding the limit to which these technologies can progress? Proper governance must be by law, rather than left to the car and technology manufacturers themselves, for whom profit is an incentive to continually advance the technology and – on the basis that these technologies are not ‘new’, so don’t require special testing or regulation (the ‘novelty trap’) – to ignore their growing risks.

The Rise of Telemedicine

In this post I’ll discuss telemedicine, an emerging technology involved in healthcare delivery. Telemedicine is medicine delivered from afar, rather than physically, in person; for instance, via phone calls or video calls between doctor and patient, or with a specialist ‘sitting in’ on a primary care consultation via video-link. It also encompasses other aspects of remote care such as ‘remote patient monitoring’, in which medical data is collected from a patient and sent to labs to be tested, or a physician to be reviewed.

Source: blogs.cisco.com

Although telemedicine isn’t a particularly new technology (it began with a video-link doctor’s consultation in America as early as 1924!), it looks like it is only now due to become a widespread, typical element of clinical practice. The Economist suggests the ‘telemedicine revolution’ may be upon us. Certain statistics support this; for instance, Deloitte estimates that 12.5% of the 600 million North American GP appointments this year will be conducted via telemedicine, and that up to 50% of these appointments are within the remit of telemedicine. Meanwhile, Google is trialling a service in which (amongst other things) patients can search a medical term and be connected via video to a doctor online (here is the website). So, is this ‘revolution’ to be welcomed?
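It’s worth a quick back-of-envelope check of what those percentages mean in absolute terms; a minimal sketch, using only the Deloitte figures quoted above:

```python
# Back-of-envelope check of the Deloitte estimates quoted above.
total_gp_appointments = 600_000_000  # North American GP appointments this year
telemedicine_share = 0.125           # 12.5% expected to be conducted via telemedicine
eligible_share = 0.50                # up to 50% said to be within telemedicine's remit

telemedicine_visits = total_gp_appointments * telemedicine_share
eligible_visits = total_gp_appointments * eligible_share

print(f"Estimated telemedicine visits: {telemedicine_visits:,.0f}")   # 75,000,000
print(f"Upper bound of eligible visits: {eligible_visits:,.0f}")      # 300,000,000
```

So even the conservative estimate implies tens of millions of remote consultations in a single year – hardly a fringe practice.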

Telemedicine certainly brings enormous benefits. Most importantly, it negates the influence of geography on healthcare. Geography is a major factor limiting patients’ access to healthcare: living in a rural or remote area means you likely have access to fewer healthcare resources, and must travel greater distances to reach them. Note that the 20% of Americans who live rurally are served by at most 10% of doctors – a significant imbalance. Having access to a specialist hundreds of miles away, via video-link, is a highly successful and necessary technological fix. The same applies to patients who live close to a doctor but are unable to reach them (for instance, epileptic patients who cannot drive). These major barriers aside, telemedicine also offers a helping hand in terms of sheer convenience, to those with busy schedules for instance.
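The rural imbalance above can be made concrete as a relative doctors-per-capita figure; a rough illustration, assuming only the two shares quoted:

```python
# Rough illustration of the rural access imbalance quoted above:
# 20% of Americans live rurally, served by at most 10% of doctors.
rural_population_share = 0.20
rural_doctor_share = 0.10  # upper bound

# Doctors per capita in rural areas, relative to the national average.
relative_density = rural_doctor_share / rural_population_share
print(f"Rural doctors per capita: at most {relative_density:.0%} of the national average")
```

In other words, rural patients have at most half the per-capita access to doctors that the average American does, before travel distances are even considered.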

One concern with telemedicine (the first which came to my mind) is the lack of physical communication. Building a rapport with a patient may require not just speech, but sometimes more subtle things like body language and physical contact; even a video-call consultation ‘radically’ changes the consultation environment, so some of these elements may be diminished or lost. What if telemedicine were to become more ubiquitous, and be used routinely instead of a visit to the local GP? In this case, some patients may come to see this as a problem. In the US, the Federation of State Medical Boards advises that even an initial physical encounter between doctor and patient isn’t necessary; some patients might feel differently, so choice is important.

Another area of concern is privacy and security. This paper sums up the issue thus:

“Because of the unique combination of patient data, video imaging, and electronic clinical information that is generated between two distant sites…privacy concerns…may be magnified within the telemedicine arena”.

For instance, it claims that ‘most telemedicine encounters are recorded’ in full; who will have access to these videos, with or without the doctor’s knowledge? You know a physical consultation is private – how can you know who is watching your recorded consultation? Similarly, who could gain access to remotely-collected data? The manufacturer of the recording device? This piece highlights the risk of data transfer between a physician’s and a patient’s computer; whilst the former should by law be thoroughly protected, the latter may not be, and so could be a target for hackers and malicious software. Given telemedicine’s potential for such widespread usage (after all, everybody is a patient), we must consider these kinds of risks; specifically, there must be measures in telemedicine software to prevent, for instance, data theft, and regulations to ensure these standards of protection are met universally.

What can we conclude from all this? When it comes to bridging physical and metaphorical distances, big or small, telemedicine is a fantastic technological fix. But it may be the case that in some years, a video-consultation with your local GP may not be unusual, and perhaps even expected. Over-embracing this technology may lead to ‘entrapment’ as Walker describes, where social commitments (such as laying down necessary infrastructure, forming legal contracts etc.) lead to a form of lock-in, in which telemedicine becomes the norm – which may not be to everyone’s taste. Opinions will differ, but it’s difficult to ascertain any general consensus: this review points out that, whilst most of the literature suggests almost universally high satisfaction, these studies suffer from ‘serious methodological weaknesses’ (like a lack of standardisation in defining ‘satisfaction’), so must be treated with scepticism rather than as an impetus for large-scale deployment. On this score, Mort et al. worry that the ‘overwhelming optimism’ with which policymakers present telemedicine overrides the ambiguity of clinical trials, and, combined with the ‘modernization agenda’, is generating pressure for roll-out rather than continued R&D. They argue that studies neglect how the technologies are used in ‘day-to-day practice’, which is no basis for proper decision-making.

Indeed, who is deciding when and where telemedicine is used? Whilst governments should equip medical practices (especially remote ones) with telemedicine capabilities, it shouldn’t be at the discretion of just any clinic to choose when to use telemedicine (which they may do on the basis that it is sufficient for the needs of a particular case, whilst being more convenient and cheaper), especially given the uncertainty I mentioned (in terms of effectiveness and patient satisfaction, as well as safety and cost-effectiveness). As a patient I’d probably prefer to see my GP in person, at least in certain cases, and I imagine as a doctor a key advantage of physical consultations is the ability to conveniently perform a physical examination. So whilst telemedicine can be hugely important, very convenient, and in many cases perfectly adequate, I don’t think it should become the standard means of conducting clinical medicine – it is important that patients (and clinicians) have choice.

Exploring the Brain

“The human brain, a 3-pound mass of interwoven nerve cells… is one of the most magnificent–and mysterious–wonders of creation… It continues to intrigue scientists and layman alike”.

President George H.W. Bush, 1990

And thus the ‘Decade of the Brain’ was proclaimed in the nineties. But this period of intense study of the brain didn’t end after ten years; indeed, in this post I want to discuss neuroscience projects which have emerged recently, in this decade, and consider the question of their governance.

In April 2013, President Obama announced the ‘BRAIN initiative’ (‘Brain Research through Advancing Innovative Neurotechnologies’), a long-term scheme to develop new technologies which will allow scientists to achieve a profound understanding of neuroscience – particularly the ‘Brain Activity Map’, an immeasurably complex map of neuronal networks.

Source: neuroblog.stanford.edu

A key goal the scientists hope to achieve thereby is the understanding and eventual treatment of a wide array of neurological and mental diseases, from Alzheimer’s disease to depression. Such a huge new project immediately poses the question – what is at stake? And this is indeed one area that has drawn criticism, with some arguing that individual disease projects may lose out; that it would be a more prudent use of limited resources (particularly financial) to focus on the diseases themselves, rather than studying the brain and hoping for a subsequent understanding of those diseases. Then again, perhaps seeking a complete understanding of neurophysiology and pathology (our knowledge of the brain is far from complete) will ultimately bring more benefit. So who decides where the money goes? The government decides – specifically the NIH (the US federal medical research body, which runs the Initiative), which awards grants to projects it deems worthy; external projects might feel they have been pushed aside so the NIH can focus on its own programme. Still, there may be much to be gained from large-scale funding for an innovative method of research.

Elsewhere, some have questioned whether the project is too vague to be viable. The neuroscientist Donald Stein criticises its ‘open-ended’ approach, with ‘no defined goals or endpoints’, and questions the validity and usefulness of brain mapping. Thus, to some, the Initiative is simply too much of a mystery in terms of what benefit it will bring, and how it will be achieved; some scientists are unhappy with this uncertainty, when there are more pressing issues in plain sight. Who is right here? One way to think of it is to use Sarewitz’s ‘technological fix’ model: if we take the problem to be neural diseases, and an incomplete understanding of their neurophysiology (often the root of the problem), does the Initiative satisfy the criteria? Firstly, it seeks to directly improve this sketchy understanding, so ‘embodies the essence of what needs to be done’, though some (like Stein) feel it lacks the ‘clear, technically feasible goals’ on which this is predicated; secondly, its results can be assessed in terms of, say, the clinical trials of any treatments it yields – it’s clear when we have a successful treatment; thirdly, the Initiative will build on the existing core of neuroscience knowledge. Overall, from this it appears that the Initiative is a promising venture, though whether its methods and goals are clear and sensible (as is required for successful research) is debated.

While reading about the Initiative, I repeatedly came across another huge neuroscience project, which has garnered much more direct and widespread criticism. The Human Brain Project (HBP), the Initiative’s European ‘equivalent’, is a plan to build an artificial simulation of the brain, with ultimately similar ambitions to the Initiative’s. An open letter from hundreds of senior neuroscientists was sent to the European Commission (which funds the HBP), citing problems such as a ‘lack of flexibility and openness’. One key problem is that the HBP’s funding scheme entails no accountability, which is thus ‘likely to lead to corruption’. This requires proper regulation – vested financial interests, for example, must not be allowed to dictate who makes the decisions – and who is affected – in schemes with potential for such widespread implications. “They’ve gotten rid of anyone who objected to anything that they wanted to do”, asserts another signatory. This again suggests poor governance; decision-making should not take the form of a dictatorship, in which only a high-level core decides how the research progresses.

Furthermore, we should also consider the ethical concerns posed by such projects. Nature worries that the new technology spawned by the Initiative might allow prediction of inevitable, yet untreatable, neurodegenerative diseases; who could gain access to this information? (This is reminiscent of a major controversy surrounding the Human Genome Project.) One bioethicist also pointed to ethically-contentious issues such as transhumanism (transforming humans through techniques such as cognitive enhancement), whilst Kelly Bulkeley questions the diversity of people whose brains will be studied – we must always consider who wins and who loses, and if research is focused on too narrow a group of people, other groups may not reap the same benefits. President Obama, in a display of good governance, is aiming to deal with such issues from the outset through the Bioethics Commission, to rein in the technology according to ethical boundaries, and help to guide it to prevent potential problems like unequal benefit.

This is a big field: other brain mapping projects exist, and more may follow! Schemes like the Initiative should be applauded for their noble aims and fantastic potential to alleviate disease – but of course, policy has an enormous role to play. Proper governance will help to prevent problems like corruption and ethical contention, to direct the research towards suitable goals, and ultimately, to achieve that great potential.

“As humans, we can identify galaxies light years away, we can study particles smaller than an atom. But we still haven’t unlocked the mystery of the three pounds of matter that sits between our ears”.

President Barack Obama, 2013 (link)

Hopefully one day, thanks to projects like the BRAIN Initiative, those three pounds will be a mystery no more.