Re: AI and will it make our lives safer

Started by Nalaar, November 06, 2020, 12:11:36 PM



Javert

Quote from: johnofgwent on November 07, 2020, 01:38:38 PM
The Airbus 300 series has an autoland system. I actually had a tiny bit to do with it. I had the misfortune to experience the finished product. It is the roughest puke inducing piece of shit this side of the vomit comet that is the SRN4 hovercraft on a BAD day.

I have explained my success at the Jubilee Line in here already.

I could go on...

In short, while I have no doubt the things you adulate will come to pass, I share Toots' view, but for very different reasons.

These things are the product of imperfect mortal man, and without exception they share, indeed inherit, those imperfections.

I have absolutely no doubt whatsoever that within five years of this technology being allowed on the road, there will be an incident of M4 J13/J14 proportions. Hundreds will die and CEOs will appear on TV sorrowfully stating there was no way they could have known.

All fine, but these arguments seem pretty similar to me to the arguments that stalled most or all human technological progress of any kind for several hundred years during the Dark Ages.

To be clear, if those vehicles are demonstrably more dangerous than having a human driver, then obviously they should not be allowed on the road.  I'm sure we agree on that.  What we seem to be disagreeing on is whether they should be allowed on the road if they are demonstrably safer than a human driver.

As regards the A300 aircraft, I have no idea, but I've certainly been on quite a few flights where autoland was used and it worked fine - in fact, several times when I had a landing so smooth that you hardly even knew the wheels had touched the ground, it turned out to be an autoland.  You could be right about the A300, but it's a pretty old plane.

Also, no doubt this will be seen as another immoral comment, but even if there is a big accident that kills hundreds of people, if the total number of people killed on the roads that year is lower than before, it is still safer on average.

In a similar vein, around the world and specifically in the USA, there are occasionally airliner accidents in which several hundred people are killed at once.  However, if airliners were banned and all those people took road trips to their destinations, even more people would be killed annually - it just wouldn't make the news, because their deaths would be spread across the year rather than all happening at the same time in a fireball.
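The per-year comparison above can be put into rough numbers with a tiny sketch. The fatality rates and travel volume below are illustrative assumptions chosen only to show the shape of the argument, not authoritative statistics:

```python
# Illustrative comparison of expected annual deaths if the same travel
# were done by air versus by road. Rates are assumed figures per
# billion passenger-km, not real statistics.

AIR_DEATHS_PER_BN_KM = 0.07   # assumed air fatality rate
ROAD_DEATHS_PER_BN_KM = 3.1   # assumed road fatality rate

def expected_deaths(passenger_km_bn, rate_per_bn_km):
    # Expected deaths = distance travelled x fatality rate per distance.
    return passenger_km_bn * rate_per_bn_km

travel = 500  # hypothetical billion passenger-km flown in a year
print(f"air:  {expected_deaths(travel, AIR_DEATHS_PER_BN_KM):.0f} deaths/yr")
print(f"road: {expected_deaths(travel, ROAD_DEATHS_PER_BN_KM):.0f} deaths/yr")
```

With these assumed rates the road total is over forty times higher, even though the air deaths would arrive in a handful of headline-grabbing crashes while the road deaths trickle in unnoticed.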

T00ts

Quote from: johnofgwent on November 07, 2020, 01:38:38 PM
I get this angle all too well.

If there is a common thread running through my career, it is that I break things, revealing their failings to disbelievers.

At the age of four, I broke Tonka toys.

Throughout my career, the highlights are those points where I have listened to the optimistic rantings of youthful enthusiasts who have waxed lyrical at the capabilities of the systems they have built and how much of an improvement they are.

The Airbus 300 series has an autoland system. I actually had a tiny bit to do with it. I had the misfortune to experience the finished product. It is the roughest puke inducing piece of shit this side of the vomit comet that is the SRN4 hovercraft on a BAD day.

I have explained my success at the Jubilee Line in here already.

I could go on...

In short, while I have no doubt the things you adulate will come to pass, I share Toots' view, but for very different reasons.

These things are the product of imperfect mortal man, and without exception they share, indeed inherit, those imperfections.

I have absolutely no doubt whatsoever that within five years of this technology being allowed on the road, there will be an incident of M4 J13/J14 proportions. Hundreds will die and CEOs will appear on TV sorrowfully stating there was no way they could have known.

They are bullshitting.

I will most likely be dead by the time this event comes to pass.

But I guarantee you it will.

You say for different reasons, and yet I totally concur with your view that mankind cannot see the imperfections in its hopes when it comes to tech, and is only too ready to sacrifice individuals in the search for improvement, as an inevitable means to an end. It is one thing for those who flew the first planes to risk their own lives in the search for progress, but for me that pales against the apparent intentions here, where lives are to be disposed of as necessary and at random.

For me this translates as the trickle of untruths that feed those not prepared to acquiesce to a greater power. I know you have lost that belief so I won't dwell on it but we reap what we sow is something I think we should heed much more in life.

johnofgwent

Quote from: Javert on November 07, 2020, 11:16:51 AM
No but you have posted that AI cars should not be allowed.

I am pointing out that pretty much everyone who actually has serious expertise in this area, knows full well that having fully AI driven vehicles on the roads in the future will reduce road deaths overall.

Therefore by implication, you are arguing that technology which will reduce road deaths should not be allowed.

Now it may be that your argument is based on the belief that the technology won't work, and that it will go wrong so often, that road deaths will increase.  I don't necessarily agree there either because I would expect that it will be very highly regulated and tested.

Arguably the bigger challenge is to navigate the medium term future where there would be a mix of human drivers and AI driven vehicles on the roads, since it would be even safer if human controlled vehicles were banned.

I am talking about probably 20 years from now, not something that will come next week.

I get this angle all too well.

If there is a common thread running through my career, it is that I break things, revealing their failings to disbelievers.

At the age of four, I broke Tonka toys.

Throughout my career, the highlights are those points where I have listened to the optimistic rantings of youthful enthusiasts who have waxed lyrical at the capabilities of the systems they have built and how much of an improvement they are.

The Airbus 300 series has an autoland system. I actually had a tiny bit to do with it. I had the misfortune to experience the finished product. It is the roughest puke inducing piece of shit this side of the vomit comet that is the SRN4 hovercraft on a BAD day.

I have explained my success at the Jubilee Line in here already.

I could go on...

In short, while I have no doubt the things you adulate will come to pass, I share Toots' view, but for very different reasons.

These things are the product of imperfect mortal man, and without exception they share, indeed inherit, those imperfections.

I have absolutely no doubt whatsoever that within five years of this technology being allowed on the road, there will be an incident of M4 J13/J14 proportions. Hundreds will die and CEOs will appear on TV sorrowfully stating there was no way they could have known.

They are bullshitting.

I will most likely be dead by the time this event comes to pass.

But I guarantee you it will.
In matters of taxation, Lord Clyde's summing up in the 1929 case Inland Revenue v Ayrshire Pullman Services is worth a glance.

T00ts

Quote from: Javert on November 07, 2020, 11:30:09 AM
Hmmm well I also understand where you are coming from.

The thing is though, I've already made the point that such decisions are already being made today by humans (often assisted by data that can only be collected by technology and by statistical information).

Also I am not saying that the decision should be made on their usefulness to society, although that could be one method - that's the whole ethical debate around it.  As I mentioned earlier, it's also a valid argument to say that in a situation like that, the car should take a random decision, because there is no way it could know everything about the people involved.  That's the exact debate that goes on, but from a moral standpoint I don't really see it differently to the decisions already being taken today.

In other words, if it's sinister, it seems to me that there are a lot of sinister decisions already being taken daily today - they are just not visible to most of us.

Let's say that an out of control car is faced with having to kill one or other person.  The driver decides to run over the person who looks much older whilst the other one is a child, on the basis that the child has more years ahead of them.

Later on, it turns out that the very old person was a PhD scientist on the verge of discovering a revolutionary new treatment for a type of cancer, while the child turns out to be a psychopathic murderer.

Why is that a different moral situation if the car was driven by an AI compared to a human?  In both cases with hindsight you could argue the decision was wrong, but there was no way to foresee it and both human driver and AI driver took what many people would consider to be the reasonable decision.

I despair - once again your argument hinges on a secular view which leads us to an acceptance of all 'progress' without looking at the wider horizon. I try so hard to see the wider implications of what we are led to accept without question. This is just one case where the well-used ploy - promises that overall deaths will decrease, that AI will be faster, that humans can make wrong choices while the AI, being programmed, will always produce the unarguably right outcome - is so delectable, so very persuasive. And then there is the argument that we have done such things in the past, even hidden from view, so why fight it now?

We are led by the nose by falsehoods, by promises of great riches, by the beguiling offerings that we know best, that we are the ultimate intelligence and that everything rests on us. We can save the world. It breaks my heart that so many simply won't listen.

I am prompted to come back to what I feel is the only argument in this. Thou shalt not kill. To programme a machine to make a decision on who to kill, and even to consider putting a machine without any sort of over-ride into a situation where that sort of programming is needed, is breaking that commandment. It doesn't matter what arguments are put forward; really there is only one true law, and we ignore it at our peril.

Javert

Quote from: T00ts on November 07, 2020, 10:46:51 AM
I understand exactly what you are arguing and in principle I have little problem with AI driven cars per se. In a secular world the arguments come thick and fast as to how progress will save this or improve that and are so 'plausible' but then that is the way they are designed. Yet the scenario we are focussing on is not the run of the mill nose to tail travel resembling a train, down the motorway, but the situation where a choice is being determined between collision with one set of people or another. Someone somewhere is then having to make a decision, and that is where I seriously fall out with the premise that a judgement can actually be made with any reference to fairness.

Let's face the reality of what we are really saying. We are not arguing over AI's ability measured against the human, we are arguing that one person is not so fit to live as another. My question is based on who do we actually believe can make that decision? Is it you? Is it me? No. So how can we justify shifting that to someone else - or even a panel of people - so that AI is told 'no matter what, you will act in this way given these circumstances'. It's inhuman to the point of being sinister.

I have said many times on this Forum that it is not our role to judge another. I will reiterate here and now that choosing and judging which people should die based on their usefulness or otherwise to society is not our decision to make. Thou shalt not kill has echoed throughout the ages. It is enshrined in our law as well as throughout the world. We go against that at our peril. The really clever human being will make the circumstances that are referred to impossible. Just because we can do something doesn't mean we should.

Hmmm well I also understand where you are coming from.

The thing is though, I've already made the point that such decisions are already being made today by humans (often assisted by data that can only be collected by technology and by statistical information).

Also I am not saying that the decision should be made on their usefulness to society, although that could be one method - that's the whole ethical debate around it.  As I mentioned earlier, it's also a valid argument to say that in a situation like that, the car should take a random decision, because there is no way it could know everything about the people involved.  That's the exact debate that goes on, but from a moral standpoint I don't really see it differently to the decisions already being taken today.

In other words, if it's sinister, it seems to me that there are a lot of sinister decisions already being taken daily today - they are just not visible to most of us.

Let's say that an out of control car is faced with having to kill one or other person.  The driver decides to run over the person who looks much older whilst the other one is a child, on the basis that the child has more years ahead of them.

Later on, it turns out that the very old person was a PhD scientist on the verge of discovering a revolutionary new treatment for a type of cancer, while the child turns out to be a psychopathic murderer.

Why is that a different moral situation if the car was driven by an AI compared to a human?  In both cases with hindsight you could argue the decision was wrong, but there was no way to foresee it and both human driver and AI driver took what many people would consider to be the reasonable decision.


Javert

Quote from: Barry on November 07, 2020, 10:38:44 AM
I have not posted that anywhere, Javert. Please stop making it up.

No but you have posted that AI cars should not be allowed.

I am pointing out that pretty much everyone who actually has serious expertise in this area, knows full well that having fully AI driven vehicles on the roads in the future will reduce road deaths overall.

Therefore by implication, you are arguing that technology which will reduce road deaths should not be allowed.

Now it may be that your argument is based on the belief that the technology won't work, and that it will go wrong so often, that road deaths will increase.  I don't necessarily agree there either because I would expect that it will be very highly regulated and tested.

Arguably the bigger challenge is to navigate the medium term future where there would be a mix of human drivers and AI driven vehicles on the roads, since it would be even safer if human controlled vehicles were banned.

I am talking about probably 20 years from now, not something that will come next week.

T00ts

Quote from: Javert on November 07, 2020, 10:16:35 AM
If you want to categorise it as simplistic that's fine - what I am questioning is whether a pre-programmed machine would make a worse decision in that instantaneous moment than a human, especially given that it would only be making the decision driven by the programming of a human. 

From what I can see, you can no more ensure that any human being will make the "right" decision or the same decision that you would have made, than you can a machine. 

I also take issue with the "infinite capability of the human mind".  The human brain is arguably the single most complex thing in the known universe.  However, it is not specifically designed to react faster than a machine in any given scenario - it's been categorically proven that an AI vehicle can react faster than any human to a known and programmed situation.

What I mean by that is that if someone walks out into the road in front of the car, depending how far in front of you they are, you may or may not be able to stop before you hit them.

There is a small margin of distance where only an AI driven vehicle would be able to stop the car in time and any human would fail, because the AI simply has faster reaction times - the time between the person appearing on the sensors and the AI taking action is a matter of milliseconds, whereas a human would have to see the person, react, move their foot to the brake pedal, and depress it the exact optimum amount - each of those steps adds a delay, and the total is measured in tenths of a second rather than milliseconds.

This scenario will actually save a lot more lives, than some of the extreme rare edge cases we have got hung up on.  As Nalaar pointed out above, no serious expert in vehicle AI believes that road deaths will increase if all cars are AI driven in the long term.  Road deaths in general will go down.

Now Barry, and by implication Toots, are effectively arguing that it would be better not to have AI vehicles and let many more road deaths continue to occur, because it would be (religiously?) immoral for a computer to make an edge-case decision in an emergency situation.

I'm challenging that argument.

I understand exactly what you are arguing and in principle I have little problem with AI driven cars per se. In a secular world the arguments come thick and fast as to how progress will save this or improve that and are so 'plausible' but then that is the way they are designed. Yet the scenario we are focussing on is not the run of the mill nose to tail travel resembling a train, down the motorway, but the situation where a choice is being determined between collision with one set of people or another. Someone somewhere is then having to make a decision, and that is where I seriously fall out with the premise that a judgement can actually be made with any reference to fairness.

Let's face the reality of what we are really saying. We are not arguing over AI's ability measured against the human, we are arguing that one person is not so fit to live as another. My question is based on who do we actually believe can make that decision? Is it you? Is it me? No. So how can we justify shifting that to someone else - or even a panel of people - so that AI is told 'no matter what, you will act in this way given these circumstances'. It's inhuman to the point of being sinister.

I have said many times on this Forum that it is not our role to judge another. I will reiterate here and now that choosing and judging which people should die based on their usefulness or otherwise to society is not our decision to make. Thou shalt not kill has echoed throughout the ages. It is enshrined in our law as well as throughout the world. We go against that at our peril. The really clever human being will make the circumstances that are referred to impossible. Just because we can do something doesn't mean we should.

Barry

Quote from: Javert on November 07, 2020, 10:16:35 AM
Now Barry, and by implication Toots, are effectively arguing that it would be better not to have AI vehicles and let many more road deaths continue to occur, because it would be (religiously?) immoral for a computer to make an edge-case decision in an emergency situation.
I have not posted that anywhere, Javert. Please stop making it up.
† The end is nigh †

Javert

Quote from: T00ts on November 06, 2020, 06:06:16 PM
Can't you see just how simplistic your view is? You are trying to equate a programmed machine to the infinite capability of the human mind. Worse than that, you are trying to sell me the notion that it would be superior. Your average human would do their level best not to hit anyone, often with little thought to their own safety. If the suggestion is that an AI driven car cannot be over-ridden by the human within, then I am sorry for the future. Your notion that 'it is still the same decision' I find quite frightening.

If you want to categorise it as simplistic that's fine - what I am questioning is whether a pre-programmed machine would make a worse decision in that instantaneous moment than a human, especially given that it would only be making the decision driven by the programming of a human. 

From what I can see, you can no more ensure that any human being will make the "right" decision or the same decision that you would have made, than you can a machine. 

I also take issue with the "infinite capability of the human mind".  The human brain is arguably the single most complex thing in the known universe.  However, it is not specifically designed to react faster than a machine in any given scenario - it's been categorically proven that an AI vehicle can react faster than any human to a known and programmed situation.

What I mean by that is that if someone walks out into the road in front of the car, depending how far in front of you they are, you may or may not be able to stop before you hit them.

There is a small margin of distance where only an AI driven vehicle would be able to stop the car in time and any human would fail, because the AI simply has faster reaction times - the time between the person appearing on the sensors and the AI taking action is a matter of milliseconds, whereas a human would have to see the person, react, move their foot to the brake pedal, and depress it the exact optimum amount - each of those steps adds a delay, and the total is measured in tenths of a second rather than milliseconds.
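To put rough numbers on that reaction-time gap, here is a small back-of-the-envelope sketch. The 1.5 s human reaction time, 50 ms system latency and 8 m/s² braking deceleration are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope stopping-distance comparison.
# Assumed figures (illustrative only): 1.5 s human reaction time,
# 0.05 s sensor-to-brake latency for an automated system,
# 8 m/s^2 braking deceleration for both.

def stopping_distance(speed_kmh, reaction_s, decel=8.0):
    v = speed_kmh / 3.6            # speed in m/s
    thinking = v * reaction_s      # distance covered before braking starts
    braking = v * v / (2 * decel)  # distance covered while braking
    return thinking + braking

for speed in (30, 50, 70):
    human = stopping_distance(speed, 1.5)
    ai = stopping_distance(speed, 0.05)
    print(f"{speed} km/h: human {human:.1f} m, automated {ai:.1f} m, "
          f"margin {human - ai:.1f} m")
```

Even at 30 km/h the assumed figures give the automated system roughly a twelve-metre head start - which can be the whole difference between stopping short of the pedestrian and hitting them.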

This scenario will actually save a lot more lives, than some of the extreme rare edge cases we have got hung up on.  As Nalaar pointed out above, no serious expert in vehicle AI believes that road deaths will increase if all cars are AI driven in the long term.  Road deaths in general will go down.

Now Barry, and by implication Toots, are effectively arguing that it would be better not to have AI vehicles and let many more road deaths continue to occur, because it would be (religiously?) immoral for a computer to make an edge-case decision in an emergency situation.

I'm challenging that argument.

T00ts

Quote from: Barry on November 06, 2020, 06:19:20 PM
The hospital I used to work in had a pharmacy, where I spent many hours.
They had a robot, state of the art, which stored deliveries of drugs by barcodes, then picked them out to be delivered to a dispenser for a prescription.
Although it was state of the art and regularly serviced, on several occasions there was an awful logjam of "traffic" which, had the packets been cars and not packets of tablets, would have meant multiple fatalities. Go figure.

What really worries me is the almost unthinking acceptance and trust. I know there are accusations that too many have been taught to pass exams rather than think but I find it really dismaying.

Barry

The hospital I used to work in had a pharmacy, where I spent many hours.
They had a robot, state of the art, which stored deliveries of drugs by barcodes, then picked them out to be delivered to a dispenser for a prescription.
Although it was state of the art and regularly serviced, on several occasions there was an awful logjam of "traffic" which, had the packets been cars and not packets of tablets, would have meant multiple fatalities. Go figure.

Barry

Quote from: Javert on November 06, 2020, 05:38:16 PM
I thought someone might bring that up - there have indeed been some outlier cases where automation made plane crashes more likely, but, even in the case of the 737 Max, it was actually possible to avoid both those crashes if the pilot's had been properly trained and informed about how to deal with the situation that arose.  The system they put in was not acceptable and should not have been certified in that way, but, both accidents were theoretically avoidable with specific pilot actions.

That's not to say that it was the pilot's fault because the pilots had not been trained in how to deal with those situations, and Boeing and the FAA and the airlines  actually deliberately chose not to train them to save money, which was clearly an unethical decision.

None of this changes the fact that the vast majority of aircraft accidents are caused by pilot error, and the vast majority of road casualties are caused by driver error.
Now, I'm not trying to be pedantic here, as we all make typos and errors. But computer code is a bit sniffy: even a minor break in the codes and sequences can cause a program to crash. If your post were a program it would crash, as one of the apostrophes is wrong.
We need to be able to trust computers far more than we can at present.
There are no 100% uptime servers. No 100% reliable Internet connections.
So we need to forget AI cars. Why do we need them? People can be trained to drive.

T00ts

Quote from: Javert on November 06, 2020, 05:46:58 PM
Far be it from me to reinvigorate another thread, but  the problem is that it's been demonstrated pretty conclusively that this approach results in unconscious bias - it could be argued that a fully transparent process is morally better than one where the surgeon could decide to make the decision based on any random "gut feel" criteria.

It's also worth keeping in mind that we are generally here talking about very rare situations where the AI car needs to know what to do - this is not something that's going to happen all the time.

Let's say you personally were driving the car and the brakes failed, and there were two people in your path and you couldn't avoid running one of them over.  What would you do?  You either have to make an instantaneous moral judgement of which life should be saved, or you have to take no action (keep going straight), and tough luck on the one who is in your existing path.

If such judgements can be made, what is the difference between the instant decision made by the driver, and the decision predetermined by the AI ethics committee - it's still the same decision?

Can't you see just how simplistic your view is? You are trying to equate a programmed machine to the infinite capability of the human mind. Worse than that, you are trying to sell me the notion that it would be superior. Your average human would do their level best not to hit anyone, often with little thought to their own safety. If the suggestion is that an AI driven car cannot be over-ridden by the human within, then I am sorry for the future. Your notion that 'it is still the same decision' I find quite frightening.

Javert

Quote from: T00ts on November 06, 2020, 05:35:29 PM
Be it a panel of surgeons, all of whom have experience, who look at the individual circumstances before them and have full use of all their skills and instincts etc, or one, it is a very far call from programming a machine to always make the decision to kill one man while saving the apparently more acceptable member of society. This is immorality at its worst and leads to discrimination on a scale I can hardly bear to think about.  It is not killing by accident or an error by a fallible human being; it is a programmed decision. However you might couch it, it is essentially murder.

Far be it from me to reinvigorate another thread, but  the problem is that it's been demonstrated pretty conclusively that this approach results in unconscious bias - it could be argued that a fully transparent process is morally better than one where the surgeon could decide to make the decision based on any random "gut feel" criteria.

It's also worth keeping in mind that we are generally here talking about very rare situations where the AI car needs to know what to do - this is not something that's going to happen all the time.

Let's say you personally were driving the car and the brakes failed, and there were two people in your path and you couldn't avoid running one of them over.  What would you do?  You either have to make an instantaneous moral judgement of which life should be saved, or you have to take no action (keep going straight), and tough luck on the one who is in your existing path.

If such judgements can be made, what is the difference between the instant decision made by the driver, and the decision predetermined by the AI ethics committee - it's still the same decision?

Javert

Quote from: Barry on November 06, 2020, 04:43:24 PM
Yep, never let on the road in the first place. And as for aircraft, how many people did the Boeing 737 Max have to kill?

I thought someone might bring that up - there have indeed been some outlier cases where automation made plane crashes more likely, but, even in the case of the 737 Max, it was actually possible to avoid both those crashes if the pilot's had been properly trained and informed about how to deal with the situation that arose.  The system they put in was not acceptable and should not have been certified in that way, but, both accidents were theoretically avoidable with specific pilot actions.

That's not to say that it was the pilot's fault because the pilots had not been trained in how to deal with those situations, and Boeing and the FAA and the airlines  actually deliberately chose not to train them to save money, which was clearly an unethical decision.

None of this changes the fact that the vast majority of aircraft accidents are caused by pilot error, and the vast majority of road casualties are caused by driver error.