Re: AI and will it make our lives safer

Started by Nalaar, November 06, 2020, 12:11:36 PM


T00ts

Quote from: Javert on November 06, 2020, 04:20:39 PM
Well, based on the positions you are taking, it seems to me that you should be against this kind of automation on principle, but to explore this a bit more...

I don't understand your need to explore my reasoning.

Quote from: Javert on November 06, 2020, 04:20:39 PM
You mentioned earlier that if a professional surgeon takes the same decision to the best of their ability, that would be OK. Is that a moral decision they are taking?

At what point does it become a moral decision by a human versus by a computer? For example, if the surgeon uses a checklist that was created on a computer to help with the decision, is that an immoral machine-assisted decision, versus if he/she just went with gut feel?

If the checklist becomes the final tool in the decision, that's immoral to me. Where that checklist is a means of covering all the discussion points but is then overridden by the surgeon - that I can live with. I actually believe that human decisions are not just an intellectual process but are affected by much deeper instincts, emotions and knowledge that no machine could replicate, no matter how good the programming. Humans are so much more than what we acknowledge.


Quote from: Javert on November 06, 2020, 04:20:39 PM
If a traffic light turned green and let cars through at the same time as there was an out-of-control vehicle coming from the other way, has the person who programmed the timer on the traffic light made an immoral decision? If the person who set that timer considered the possibility of people running the red light versus the efficiency of traffic flow and came to a compromise position on the timer, is that an immoral decision?

This is a poor example, so I am not sure what you expect me to gain from it.

Quote from: Javert on November 06, 2020, 04:20:39 PM
By the way, just for the record, as far as I know in the heart transplant scenario it's actually not one surgeon who makes that decision - the hospital has an ethics committee who make the decisions jointly. Does that mean the surgeon is attempting to absolve responsibility?

I would refer you to my previous comment. Be it one experienced surgeon or a panel of them, looking at the individual circumstances before them with full use of all their skills and instincts, it is a far cry from programming a machine to always make the decision to kill one man while saving the apparently more acceptable member of society. This is immorality at its worst and leads to discrimination on a scale I can hardly bear to think about. It is not killing by accident or an error by a fallible human being; it is a programmed decision. However you might couch it, it is essentially murder.

johnofgwent

Quote from: patman post on November 06, 2020, 03:47:26 PM
DLR operates well. Admittedly it has fewer parameters to contend with autonomously than a road vehicle. But as little as 40 years ago the DLR was futuristic. In ten or 20 years, self-driving road vehicles will be available and probably safer than today's models. The latest stats show about 1,784 road deaths, with 25,511 seriously injured, in 2018.
The Institute of Advanced Motorists is in favour of current autonomous safety equipment, so it's only a short step to full autonomy...
https://www.fleetnews.co.uk/news/fleet-industry-news/2019/09/27/uk-road-casualty-statistics-labelled-dsigraceful
Er .. no
I had a hand in the radio modems in the trains on the Jubilee line, which used the same tech as the DLR, and I know the guys who did the DLR ATP. I remember the look on their faces when I proved their wonderful system could ignore the red lights at the sidings and go straight onto the main line, causing another Ladbroke Grove. And two months later the next iteration of the as-yet far-from-released software stopped that.
I have few qualms about the DLR ATP because it takes far more than that to derail a train which is already heavily controlled as to where, and how fast, it can go.
I doubt I will live to see AI powering cars, in the same way that my father as a young man was bombarded with the view that his grandchildren would be driving cars that flew. Not going to happen.
In matters of taxation, Lord Clyde's summing up in the 1929 case Inland Revenue v Ayrshire Pullman Services is worth a glance.

johnofgwent

Quote from: Nalaar on November 06, 2020, 02:24:13 PM

Intentionally confusing a car's sensors in a way that results in a crash sounds like the kind of action that will result in imprisonment.

You would have to find me first.

Right now, thanks to utter insanity on the part of certain electric vehicle manufacturers, Android apps have access to secondary systems. I leave it to your imagination to consider how long it will take some enterprising criminal to work out how to mess with a car's primary system - and how you are going to track me down behind the sort of rogue countries and VPNs that most deployers of ransomware exploit.
In matters of taxation, Lord Clyde's summing up in the 1929 case Inland Revenue v Ayrshire Pullman Services is worth a glance.

johnofgwent

Quote from: papasmurf on November 06, 2020, 02:20:09 PM
Quite, plus GPS going offline.
Well, that is one of the known issues with the existing technology. If it loses GPS, it requires the human driver who is not driving to start earning his pay packet. As Michael O'Leary famously points out when denigrating his employees, a machine can fly this aircraft, but the pilots are not there for when the machine is in control; they are there for when the machine goes nuts or bits fall off.
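To make that concrete, here is a minimal sketch of the kind of degraded-mode handover I mean - every name and state in it is invented for illustration, and it is nothing like real avionics or vehicle code:

Code: [Select]
# Minimal sketch of a degraded-mode handover: the automation only drives
# while its inputs are healthy, and demands the human take over when not.
# All names here are invented for illustration.
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    TAKEOVER_REQUESTED = auto()   # alarm sounding: start earning your pay
    HUMAN_CONTROL = auto()

def supervise(mode: Mode, gps_ok: bool, human_has_wheel: bool) -> Mode:
    """One tick of a supervisory loop deciding who is in control."""
    if mode is Mode.AUTONOMOUS and not gps_ok:
        return Mode.TAKEOVER_REQUESTED
    if mode is Mode.TAKEOVER_REQUESTED and human_has_wheel:
        return Mode.HUMAN_CONTROL
    return mode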

I see, typing this, that Barry already raised the issue of the Boeing 737 Max.

Last September I had the rather fascinating pleasure of overflying about 200 of them. I was sitting in a 777 that my mate did part of the avionics for, coming in to land at SeaTac, which meant overflying Boeing's main factory, and by God did they have a car park where the aircraft were lined up going nowhere. And that was long before the coronapox...
In matters of taxation, Lord Clyde's summing up in the 1929 case Inland Revenue v Ayrshire Pullman Services is worth a glance.

Barry

Quote from: T00ts on November 06, 2020, 03:14:03 PM
In that case then there should be just one decision. The vehicle should be stopped.
Yep, never let on the road in the first place. And as for aircraft, how many people did the Boeing 737 Max have to kill?
† The end is nigh †

Javert

Quote from: T00ts on November 06, 2020, 03:29:55 PM
In which case we are still being asked which life is more important. That is the immorality of the whole concept. No wonder the programmers etc want to dilute the responsibility.

Well, based on the positions you are taking, it seems to me that you should be against this kind of automation on principle, but to explore this a bit more...

You mentioned earlier that if a professional surgeon takes the same decision to the best of their ability, that would be OK. Is that a moral decision they are taking?

At what point does it become a moral decision by a human versus by a computer? For example, if the surgeon uses a checklist that was created on a computer to help with the decision, is that an immoral machine-assisted decision, versus if he/she just went with gut feel?

If a traffic light turned green and let cars through at the same time as there was an out-of-control vehicle coming from the other way, has the person who programmed the timer on the traffic light made an immoral decision? If the person who set that timer considered the possibility of people running the red light versus the efficiency of traffic flow and came to a compromise position on the timer, is that an immoral decision?

By the way, just for the record, as far as I know in the heart transplant scenario it's actually not one surgeon who makes that decision - the hospital has an ethics committee who make the decisions jointly. Does that mean the surgeon is attempting to absolve responsibility?

patman post

Quote from: Barry on November 06, 2020, 02:57:14 PM
No. I prefer a human driver, but with that driver given as much aid as possible in making decisions - proximity/speed alarms, ice alerts - so that risk is mitigated as far as possible.
We need a separate thread for this!
DLR operates well. Admittedly it has fewer parameters to contend with autonomously than a road vehicle. But as little as 40 years ago the DLR was futuristic. In ten or 20 years, self-driving road vehicles will be available and probably safer than today's models. The latest stats show about 1,784 road deaths, with 25,511 seriously injured, in 2018.
The Institute of Advanced Motorists is in favour of current autonomous safety equipment, so it's only a short step to full autonomy...
https://www.fleetnews.co.uk/news/fleet-industry-news/2019/09/27/uk-road-casualty-statistics-labelled-dsigraceful
On climate change — we're talking, we're beginning to act, but we're still not doing enough...

papasmurf

Quote from: T00ts on November 06, 2020, 03:29:55 PM
In which case we are still being asked which life is more important.

Important to whom? If it was a choice between Bojo-The-Clown and a brain surgeon, the brain surgeon gets saved.
Nemini parco qui vivit in orbe

T00ts

Quote from: Javert on November 06, 2020, 03:25:24 PM
Yes, obviously, but I think that the scenarios Nalaar is raising are where the vehicle is unable to stop in time due to some kind of external failure or unforeseeable event, like someone running into the road from both sides at the same time or suchlike. A computer can react and stop the vehicle much quicker than a human, but it cannot defeat the laws of physics.

In which case we are still being asked which life is more important. That is the immorality of the whole concept. No wonder the programmers etc want to dilute the responsibility.

papasmurf

Quote from: Nalaar on November 06, 2020, 02:47:23 PM
Terrorists are also fond of manually driving cars into crowds of people.


Terrorists managed to take control of a fully armed military drone in the not-so-distant past.
Nemini parco qui vivit in orbe

Javert

Quote from: T00ts on November 06, 2020, 03:14:03 PM
In that case then there should be just one decision. The vehicle should be stopped.

Yes, obviously, but I think that the scenarios Nalaar is raising are where the vehicle is unable to stop in time due to some kind of external failure or unforeseeable event, like someone running into the road from both sides at the same time or suchlike. A computer can react and stop the vehicle much quicker than a human, but it cannot defeat the laws of physics.
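To put rough numbers on that last point - a back-of-the-envelope sketch, where the reaction times and the friction coefficient are illustrative assumptions rather than measured values:

Code: [Select]
# Stopping distance = reaction distance + braking distance v^2/(2*mu*g).
# Faster reactions shrink the first term; physics fixes the second.
G = 9.81    # gravity, m/s^2
MU = 0.7    # assumed tyre/road friction (dry tarmac)

def stopping_distance_m(speed_kmh: float, reaction_time_s: float) -> float:
    v = speed_kmh / 3.6                 # km/h -> m/s
    return v * reaction_time_s + v ** 2 / (2 * MU * G)

print(stopping_distance_m(50, 1.5))    # human, ~1.5 s to react: ~34.9 m
print(stopping_distance_m(50, 0.1))    # computer, ~0.1 s: ~15.4 m
# The braking term (~14 m from 50 km/h) is identical for both.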

T00ts

Quote from: Javert on November 06, 2020, 03:04:15 PM
If you talk to the NTSB or AAIB they will tell you that over the decades the best way to prevent aircraft accidents generally is to remove the human pilot from as much of the flying as possible.

Of course there may be exceptions to that, but in most cases the accidents are caused by a combination of human errors that would have been avoided with correctly programmed automation.

But we are not talking about preventing accidents. We are talking about who to hit and potentially kill, should circumstances arise to make that a choice.

Quote from: Javert on November 06, 2020, 03:11:37 PM
Yes you are correct that the programming would be subject to the biases and opinions of the person who is designing the software.

As I understand it, that is exactly why they are trying to create an ethical framework around how these decisions should be taken - it can be characterised if you like as "absolving themselves of responsibility" but it can also be seen as they don't think it's the right of the software programmer or designer to make moral judgements, whilst also acknowledging that these are real scenarios that the car might encounter and it needs to be told what to do. 

I suppose it could be that the computer, if faced with any ethical dilemma, should take no avoiding action and continue as if there was no such issue, since a computer should not be making moral decisions. Of course the counter argument here is that not doing anything is also a decision in itself that could result in a worse outcome.

In that case then there should be just one decision. The vehicle should be stopped.

Javert

Quote from: T00ts on November 06, 2020, 02:57:35 PM
Yes, I can see why you would take my view as extreme. In the case of a heart transplant it is indeed a choice qualified people have to take, and they hopefully reason as to the best course of action. I trust them to make the best choice according to their ability. I have no problem with that. What irks me is the prospect of a machine being programmed to make choices where there is no chance of making a reasoned choice. Whatever programmes one creates, there is no way, unless everyone is microchipped to prove their value in the sight of the scientist/programmer, that said machine can be expected to replace the human. Why would sensible people allow this to go where it will?

Yes you are correct that the programming would be subject to the biases and opinions of the person who is designing the software.

As I understand it, that is exactly why they are trying to create an ethical framework around how these decisions should be taken - it can be characterised if you like as "absolving themselves of responsibility" but it can also be seen as they don't think it's the right of the software programmer or designer to make moral judgements, whilst also acknowledging that these are real scenarios that the car might encounter and it needs to be told what to do. 

I suppose it could be that the computer, if faced with any ethical dilemma, should take no avoiding action and continue as if there was no such issue, since a computer should not be making moral decisions. Of course the counter argument here is that not doing anything is also a decision in itself that could result in a worse outcome.
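That last point can be made concrete with a purely hypothetical sketch (nothing like real vehicle software): even a "take no avoiding action" policy has to be written down as a branch somewhere, so it is still a programmed decision.

Code: [Select]
# Purely hypothetical sketch: the "do nothing" policy is still code.
def choose_action(obstacle_ahead: bool, can_stop_in_time: bool) -> str:
    if not obstacle_ahead:
        return "continue"
    if can_stop_in_time:
        return "brake"
    # The "no moral judgement" option: brake in a straight line and accept
    # the outcome. Declining to swerve is itself a choice someone coded.
    return "brake_straight_ahead"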

Javert

Quote from: Barry on November 06, 2020, 02:57:14 PM
No. I prefer a human driver, but with that driver given as much aid as possible in making decisions - proximity/speed alarms, ice alerts - so that risk is mitigated as far as possible.
We need a separate thread for this!

If you talk to the NTSB or AAIB they will tell you that over the decades the best way to prevent aircraft accidents generally is to remove the human pilot from as much of the flying as possible.

Of course there may be exceptions to that, but in most cases the accidents are caused by a combination of human errors that would have been avoided with correctly programmed automation.

T00ts

Quote from: Javert on November 06, 2020, 02:42:51 PM
So what is your position on this then?

Should we assume that you think these questions are so immoral to consider that all self-driving car research should be halted immediately?

Or should the choice be made randomly on the basis that any pre-ordained decision on such a matter would be immoral?

Or should the vehicle simply say "moral overload" on the screen and shut itself down in an attempt to avoid responsibility (which in fact in most cases is the first answer in that test anyway and is an active decision)?

If a human being is faced with a similar decision, which of the above three should apply? For example, choosing which patient gets a heart transplant if only one heart is available - that's a real choice that real people have to make today, even if you don't want to know about it.

Yes, I can see why you would take my view as extreme. In the case of a heart transplant it is indeed a choice qualified people have to take, and they hopefully reason as to the best course of action. I trust them to make the best choice according to their ability. I have no problem with that. What irks me is the prospect of a machine being programmed to make choices where there is no chance of making a reasoned choice. Whatever programmes one creates, there is no way, unless everyone is microchipped to prove their value in the sight of the scientist/programmer, that said machine can be expected to replace the human. Why would sensible people allow this to go where it will?