How should driverless cars handle emergencies? 2021-02-07 00:35



Today I have been both murderous and merciful. I have deliberately mown down pensioners and a pack of dogs.

I have ploughed into the homeless, slain a couple of athletes and run over the obese. But I have always tried to save the children. As I finish my session on the Moral Machine — a public experiment being run by the Massachusetts Institute of Technology — I learn that my moral outlook is not universally shared. Some argue that aggregating public opinions on ethical dilemmas is an effective way to endow intelligent machines, such as driverless cars, with limited moral reasoning capacity.
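The aggregation idea can be made concrete with a small sketch. Assuming each respondent simply picks an outcome per dilemma, a majority vote per scenario yields a crude "crowd policy". The scenario names and votes below are invented for illustration; this is not MIT's actual method or data.

```python
# Hypothetical sketch of crowd-sourced moral aggregation, loosely inspired
# by the Moral Machine setup. Scenario labels and vote counts are invented.
from collections import Counter

# Each respondent picks "swerve" or "stay" for each dilemma.
votes = {
    "passengers_vs_pedestrians": ["swerve", "stay", "swerve", "swerve"],
    "young_vs_old": ["stay", "stay", "swerve", "stay"],
}

def aggregate_policy(votes):
    """Reduce raw votes to a majority-rule choice per dilemma."""
    return {scenario: Counter(choices).most_common(1)[0][0]
            for scenario, choices in votes.items()}

policy = aggregate_policy(votes)
# {"passengers_vs_pedestrians": "swerve", "young_vs_old": "stay"}
```

Even this toy version exposes the problem the article raises: the output is whatever the majority happens to prefer, and a dissenting rider has no say in the policy the car would execute.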

Yet after my experience, I am not convinced that crowdsourcing is the best way to develop what is essentially the ethics of killing people. The question is not purely academic: Tesla is being sued in China over the death of the driver of a car equipped with its semi-autonomous autopilot.

Tesla denies the technology was at fault. Anyone with a computer and a coffee break can contribute to MIT's mass experiment, which imagines the brakes failing on a fully autonomous vehicle. The vehicle is packed with passengers, and heading towards pedestrians.

The experiment depicts 13 variations of the trolley problem — a classic dilemma in ethics that involves deciding who will die under the wheels of a runaway tram.

In MIT's reformulation, the runaway is a self-driving car that can keep to its path or swerve; both mean death and destruction.

The choice can be between passengers and pedestrians, or two sets of pedestrians. Calculating who should perish involves pitting more lives against fewer, young against old, professionals against the homeless, pregnant women against athletes, humans against pets. At heart, the trolley problem is about deciding who lives and who dies — the kind of judgment that truly autonomous vehicles may eventually make. My preferences are revealed afterwards: I mostly save children and sacrifice pets.
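Any concrete policy behind such choices must quantify lives against one another. The toy scorer below makes that explicit: the attribute weights are invented purely for illustration and are not values endorsed by the article or by MIT's experiment — which is precisely why they are uncomfortable to write down.

```python
# A toy utilitarian scorer for trolley-style outcomes. The weights are
# arbitrary assumptions, chosen only to show that any implemented policy
# must put numbers on lives.
WEIGHTS = {"child": 2.0, "adult": 1.0, "pet": 0.3}

def harm(outcome):
    """Total weighted harm of killing everyone in `outcome`."""
    return sum(WEIGHTS[kind] * count for kind, count in outcome.items())

def choose(stay, swerve):
    """Pick the course of action with the lower weighted harm."""
    return "stay" if harm(stay) <= harm(swerve) else "swerve"

# Staying kills two adults (harm 2.0); swerving kills one child and a
# pet (harm 2.3) — so this policy stays on course.
print(choose({"adult": 2}, {"child": 1, "pet": 1}))  # prints "stay"
```

Change any single weight and the car's decision can flip, which is the crowdsourcing problem in miniature: the disagreement is not about the code but about the numbers.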

Pedestrians who are not jaywalking are spared and passengers expended. The logic seems obvious: by choosing to climb into a driverless car, occupants should shoulder the burden of risk. As for my aversion to swerving, should caution not dictate that driverless cars are generally programmed to follow the road? It is illuminating — until you see how your preferences stack up against everyone else's.

In the business of life-saving, I fall short — especially when it comes to protecting car occupants. Upholding the law and not swerving seem more important to me than to others; the social status of my intended victims, much less so.


We could argue over the technical aspects of dishing out death judiciously. For example, if we are to condemn car occupants, would we go ahead regardless of whether the passengers are children or criminals? But to fret over such details would be pointless. If anything, this experiment demonstrates the extreme difficulty of reaching a consensus on the ethics of driverless cars.

Similar surveys show that the utilitarian ideal of saving the greatest number of lives works pretty well for most people — as long as they are not the roadkill. I am pessimistic that we can simply pool our morality and subscribe to a norm, because, at least for me, the norm is not normal.

This is the hurdle faced by makers of self-driving cars, which promise safer roads overall by reducing human error: who will buy a vehicle run on murderous algorithms they do not agree with, let alone a car programmed to sacrifice its occupants? It is the idea of premeditated killing that is most troubling.

That sensibility renders the death penalty widely unpalatable, and ensures abortion and euthanasia remain contentious areas of regulation. Most of us, though, grudgingly accept that accidents happen. Even with autonomous cars, there may be room for leaving some things to chance.