More Boris lockdown breaches

Started by patman post, May 23, 2023, 09:32:02 PM



BeElBeeBub

Quote from: Scott777 on June 20, 2023, 03:10:00 PM

No, it does not establish a floor.  As you agreed, there is variability.  These are estimates, and there should be a range of specificity, as with the blood tests.
It does establish a floor, because the figure they take (99.9%) sits below even the lower end of the 95% confidence interval you would put on the estimate.

If you estimate the specificity directly from the 160/200k numbers you get about 99.92%, and that figure will have some variability, something like 99.91% to 99.94%.

So using a lower bound below the directly observed worst-case figure (and the specificity only goes up if you assume not every positive was a false one) is effectively a floor.
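Here's a quick Python sketch of that direct estimate (my own back-of-envelope calculation using a simple normal approximation, not figures from the ONS paper):

```python
import math

# Worst case: assume all 160 positives out of 208,730 tests were false.
tests = 208_730
false_positives = 160

fp_rate = false_positives / tests          # ~0.00077
specificity = 1 - fp_rate                  # about 99.92%

# Simple normal-approximation 95% interval for the false-positive rate,
# converted back into a specificity range.
se = math.sqrt(fp_rate * (1 - fp_rate) / tests)
lo = 1 - (fp_rate + 1.96 * se)             # lower end of specificity
hi = 1 - (fp_rate - 1.96 * se)             # upper end of specificity

print(f"specificity ~ {specificity:.5f} (95% CI {lo:.5f} to {hi:.5f})")
```

That lands at roughly 99.92%, with an interval of about 99.91% to 99.94%, so a stated floor of 99.9% sits below the whole range.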


Quote
The chance of getting exactly 160 positives may be small, but the chance of getting a low number is not minuscule.

The figures I looked at were for the chance of getting 160 *or fewer* positives.  If the specificity really were 99%, the chance of getting 160 or fewer positives in 208,730 tests is effectively zero; even at 99.9% it is only a few hundredths of a percent.  Bear in mind the usual confidence limits are a 5% chance and sometimes a 1% chance.  That is orders of magnitude lower again.
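For anyone who wants to check that tail probability themselves, here is a small Python sketch (my own calculation, summing the exact binomial terms in log space to avoid overflow):

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), each term computed in log space."""
    log_q = math.log1p(-p)
    total = 0.0
    for i in range(k + 1):
        log_pmf = (math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
                   + i * math.log(p) + (n - i) * log_q)
        total += math.exp(log_pmf)
    return total

n = 208_730
p_999 = binom_cdf(160, n, 0.001)  # specificity 99.9%: a few hundredths of a percent
p_99 = binom_cdf(160, n, 0.01)    # specificity 99%: vanishes entirely

print(p_999, p_99)
```

With an assumed specificity of 99.9% the chance of 160 or fewer positives comes out at a few hundredths of a percent; at 99% it underflows to zero in floating point, i.e. it is vanishingly small.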

Again, all of the above tells me you don't actually know about statistics, testing, or the presentation of results.

These are the official figures. It's the bloody ONS, a professional statistical organisation. THE professional statistical organisation in the UK. You won't get more official figures than these.

The period in question was not "the height of the pandemic". I don't know if your memory has gone, but summer 2020 was the lull between the first and second waves: infections were way down, hospital admissions were way down, deaths were way down.


Scott777

Quote from: BeElBeeBub on June 20, 2023, 11:14:45 AM
The study authors were able to say that the specificity was high, >99.9%, because in the course of their study there was a period of low prevalence.  During that period there were very few positive results -  that allows a floor for the specificity to be established.

No, it does not establish a floor.  As you agreed, there is variability.  These are estimates, and there should be a range of specificity, as with the blood tests.


Quote from: BeElBeeBub on June 20, 2023, 11:14:45 AM
Yes, it is an estimate and, as you point out, you can flip a coin 10x and get 9 heads.  That doesn't mean the chance of heads is 90%

However, as the number of tests (coin flips) increases the chance that you will get the erroneous 90% result decreases.  Flipping a coin 10x and getting 9 heads is not unheard of. Getting 90 out of 100 would be much harder.  900 out of 1000 extremely unlikely.

The chance of a test with a specificity of 99% being run 200,000 times and only getting 160 positives is minuscule.

The chance of getting exactly 160 positives may be small, but the chance of getting a low number is not minuscule.

I think this conversation has run its course.  The study does not state the range of specificity as you claim.  It makes a conditional statement only.  Finally, it would be interesting to compare this 6 week period in 2020 with the official statistics.  This is supposed to be the height of the 'pandemic', and yet has very low positive results.  How do you explain that?
Those princes who have done great things have held good faith of little account, and have known how to craftily circumvent the intellect of men.  Niccolò Machiavelli.

BeElBeeBub


Quote from: Scott777 on June 19, 2023, 04:00:47 PM
You're inviting me to believe there's a separate study to determine the specificity.  (You do understand specificity has to be found by testing it, don't you?).  So if this isn't part of this study, where is the study to find it?  I would expect a reference to it in this study.
You aren't making sense.

The study was a long-term repeating study looking at infection rates over time across a cross-section of the population.

The primary test method was PCR.  Other methods were used to tease out other data and cross check (the antibody test mentioned)

Obviously, one question that came up was "what is the specificity of the test (in this case PCR) that you are using?"

The study authors were able to say that the specificity was high, >99.9%, because in the course of their study there was a period of low prevalence.  During that period there were very few positive results -  that allows a floor for the specificity to be established.

Which they did: the directly observed worst case was about 99.92%, which they conservatively stated as "above 99.9%".


Quote
Here is the flaw in your understanding of specificity.  As you admit yourself, specificity can vary over time, and for many reasons.  For that exact reason, the more tests you do, the more accurate your estimate of specificity.  In your example, it's possible the next 1000 nuns gave 100 positive tests.  That then changes the overall estimate of specificity.  It's like tossing 100 coins.  We would guess there is more likely to be 50 heads and 50 tails, but it's unlikely we would get that.  As we continue to toss more coins, we gradually approach the expected 50/50 proportion, unless the coin is weighted or biased due to imperfections.  If you just pick a few tosses which happen to be all heads, you CANNOT assume they will always be heads.

The specificity is ONLY an estimate.  That is why the blood tests were all given as a range.  So you need to explain why this PCR test has no range, why scenario 2 is omitted, why the specificity is not clearly stated anywhere except for scenario 1, and why no reference is made to some other study to find this specificity.  Your explanation does not add up.
Yes, it is an estimate and, as you point out, you can flip a coin 10x and get 9 heads.  That doesn't mean the chance of heads is 90%

However, as the number of tests (coin flips) increases the chance that you will get the erroneous 90% result decreases.  Flipping a coin 10x and getting 9 heads is not unheard of. Getting 90 out of 100 would be much harder.  900 out of 1000 extremely unlikely. 

The chance of a test with a specificity of 99% being run 200,000 times and only getting 160 positives is minuscule.
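The coin-flip intuition is easy to verify exactly (a quick illustration of mine, using integer arithmetic so there is no rounding):

```python
from math import comb

def p_at_least(heads, flips):
    """Exact P(at least `heads` heads) for a fair coin, via integer arithmetic."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

print(p_at_least(9, 10))      # about 0.0107, roughly 1 in 93: not unheard of
print(p_at_least(90, 100))    # on the order of 1e-17: essentially never
print(p_at_least(900, 1000))  # smaller again by well over a hundred orders of magnitude
```

The erroneous 90%-heads result goes from "happens now and then" at 10 flips to "never in the lifetime of the universe" at 100, which is exactly why large test counts pin the estimate down.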


Given the observed results the authors concluded that the specificity was much greater than 99.9% (they state this is what they did).  

In effect the range is 99.9-100%, to a confidence limit (much) greater than 95% (please, if you don't understand confidence limits, say so, rather than misinterpreting that 95% figure).  As they then decided to use the lower limit of that range, there was no reason to keep stating 99.9-100%: any deviation of the true figure from 99.9% would be upwards, and would only improve the confidence in the overall conclusions.  In effect they used 99.9% as the worst case because they were very confident the actual value was higher.




Scott777

Quote from: BeElBeeBub on June 19, 2023, 07:22:39 AM
The specificity isn't for the study. It's for the test. In this case the PCR test.

You're inviting me to believe there's a separate study to determine the specificity.  (You do understand specificity has to be found by testing it, don't you?).  So if this isn't part of this study, where is the study to find it?  I would expect a reference to it in this study.

Quote from: BeElBeeBub on June 19, 2023, 07:22:39 AM
The significance of the July/Sept results is that it shows the specificity of the test is high.

Let's try this.

Say you have developed a pregnancy test and you want to know the false positive rate.

You give the test to 1000 nuns and get 5 positive results

Now it is possible those 5 results are real (virgin birth, naughty nuns etc)

However, if you assume all 5 of those results are false, you calculate your specificity as at least 99.5%.

It could be higher if some of those 5 pregnancies were actual pregnancies.

That is pretty much the textbook way to determine specificity

That is why the July/Sept test results are important. They set a lower bound for specificity. There is virtually zero chance you could get such a result (160 positives in ~200k tests) if the specificity were below 99.9%.

Yes the specificity can vary over time, you could have a bad batch of chemicals or poor swabbing or any number of things.


Here is the flaw in your understanding of specificity.  As you admit yourself, specificity can vary over time, and for many reasons.  For that exact reason, the more tests you do, the more accurate your estimate of specificity.  In your example, it's possible the next 1000 nuns gave 100 positive tests.  That then changes the overall estimate of specificity.  It's like tossing 100 coins.  We would guess there is more likely to be 50 heads and 50 tails, but it's unlikely we would get that.  As we continue to toss more coins, we gradually approach the expected 50/50 proportion, unless the coin is weighted or biased due to imperfections.  If you just pick a few tosses which happen to be all heads, you CANNOT assume they will always be heads.

The specificity is ONLY an estimate.  That is why the blood tests were all given as a range.  So you need to explain why this PCR test has no range, why scenario 2 is omitted, why the specificity is not clearly stated anywhere except for scenario 1, and why no reference is made to some other study to find this specificity.  Your explanation does not add up.

BeElBeeBub

And you still haven't explained why the specificity of S2 matters when it is, by definition, a made-up value.

If they ran scenario 3 with a specificity of 50% that would only tell you the effect of a low specificity on the conclusions and not mean the specificity was actually 50%

BeElBeeBub

Quote from: Scott777 on June 18, 2023, 10:44:19 PM
But they didn't ONLY conduct those 208,730 tests.  Take note of this: "For example, in the six-week period from 31 July to 10 September 2020..." .  I presume you understand what "for example" means, although I may be asking too much.  You cannot simply pick an example period (6 weeks) from the data, and decide what the minimum specificity must be.  That's not how it works.  For example, maybe the following 6 week period came up with many positives, which may have been false.  To be precise, the study is only stating that specificity would be at least 99.9% if they only studied 31 July to 10 September.  The article does not state that the specificity for the whole study was 99.9%.

The specificity isn't for the study. It's for the test. In this case the PCR test.
You maintain the specificity of the PCR test is low enough that false positives are distorting the results.

The significance of the July/Sept results is that it shows the specificity of the test is high.

Let's try this.

Say you have developed a pregnancy test and you want to know the false positive rate.

You give the test to 1000 nuns and get 5 positive results

Now it is possible those 5 results are real (virgin birth, naughty nuns etc)

However, if you assume all 5 of those results are false, you calculate your specificity as at least 99.5%.

It could be higher if some of those 5 pregnancies were actual pregnancies.

That is pretty much the textbook way to determine specificity

That is why the July/Sept test results are important. They set a lower bound for specificity. There is virtually zero chance you could get such a result (160 positives in ~200k tests) if the specificity were below 99.9%.

Yes the specificity can vary over time, you could have a bad batch of chemicals or poor swabbing or any number of things.

That is why things like tests are quality controlled, with surveillance to check if results are out of tolerance, batch testing, lot marking, cross testing with other labs etc.  

Any drift significant enough to shift from the near 100% to below 99.9% would be detectable by the various QA procedures.
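To make the pregnancy-test logic concrete, here is a short Python sketch (my own illustration, not anything from the ONS study): the worst case assumes every positive is false, and a textbook one-sided exact bound can be found by bisecting on the binomial CDF.

```python
from math import comb

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, positives = 1000, 5          # the hypothetical nun trial
worst_case = 1 - positives / n  # 0.995: assume every positive is false

# One-sided exact (Clopper-Pearson style) 95% lower bound on specificity:
# bisect for the false-positive rate at which seeing <= 5 positives
# would only happen 5% of the time.
lo, hi = positives / n, 0.05
for _ in range(60):
    mid = (lo + hi) / 2
    if binom_cdf(positives, n, mid) > 0.05:
        lo = mid   # this rate is still plausible, push higher
    else:
        hi = mid   # this rate is already implausible, pull lower
spec_lower_bound = 1 - hi

print(worst_case, spec_lower_bound)
```

So with 5 assumed-false positives in 1,000 tests, the worst-case point estimate is 99.5%, and even the exact 95% lower confidence bound only drops to about 98.9%.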


Scott777

Quote from: BeElBeeBub on June 17, 2023, 06:39:10 PM
https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/methodologies/covid19infectionsurveypilotmethodsandfurtherinformation#test-sensitivity-and-specificity

They conducted over 200k PCR tests and only got 159 positive results.  Even if you assume every single one of those was a false positive, the specificity is still above 99.9%.

You are also confusing the scenarios, in which the paper uses made-up sensitivity and specificity values to illustrate how sensitive (or otherwise) the actual results are to variations in those values.

But they didn't ONLY conduct those 208,730 tests.  Take note of this: "For example, in the six-week period from 31 July to 10 September 2020..." .  I presume you understand what "for example" means, although I may be asking too much.  You cannot simply pick an example period (6 weeks) from the data, and decide what the minimum specificity must be.  That's not how it works.  For example, maybe the following 6 week period came up with many positives, which may have been false.  To be precise, the study is only stating that specificity would be at least 99.9% if they only studied 31 July to 10 September.  The article does not state that the specificity for the whole study was 99.9%.

Scott777

Quote from: BeElBeeBub on June 17, 2023, 06:39:10 PM
Do you read anything, or just make it up in your head?

Says the gaslighting troll who insisted the specificity was the same for both scenarios, ignoring the statement that it was different.  🤣

BeElBeeBub

Quote from: Scott777 on June 17, 2023, 09:46:15 AM
What evidence it is at least 99.9% ? That's not stated in the study, unless you are only looking at scenario 1.

There is a reason for scenario 2.  "To allow for the fact that individuals are self-swabbing, Scenario 2 assumes a lower overall sensitivity rate..." .  You see, with self-swabbing, the data is less reliable.  They have to take that into account.  Therefore, you cannot just ignore scenario 2.  As I have said, the only indication of specificity for that is the evaluation, (98.1% to 99.5%).
Sweet Jesus.


Do you read anything, or just make it up in your head?

Right, here is the link (for the 3rd time, I think). Go read it.

https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/methodologies/covid19infectionsurveypilotmethodsandfurtherinformation#test-sensitivity-and-specificity

And here is the really important bit

Quote
Test sensitivity

Test sensitivity measures how often the test correctly identifies those who have the virus, so a test with high sensitivity will not have many false-negative results. Studies suggest that sensitivity may be somewhere between 85% and 98%.

Our study involves participants self-swabbing, where there is the possibility that some participants may collect the swab sample incorrectly, which could lead to more false-negative results. However, since national testing programmes started in August 2020, most people in the UK became familiar with taking nose and throat swabs themselves.

Test specificity

Test specificity measures how often the test correctly identifies those who do not have the virus, so a test with high specificity will not have many false-positive results.

We know the specificity of our test must be very close to 100% as the low number of positive tests in our study over the summer of 2020 means that specificity would be very high even if all positives were false. For example, in the six-week period from 31 July to 10 September 2020, 159 of the 208,730 total samples tested positive. Even if all these positives were false, specificity would still be above 99.9%.

Now to clear things up, they are talking about the PCR test because a) they talk about swabbing (which you don't do for blood) and b) they say they are using PCR earlier


Quote
Nose and throat swabs
The nose and throat swabs are sent to the Lighthouse Laboratory in Glasgow. They are tested for SARS-CoV-2 using reverse transcriptase polymerase chain reaction (RT-PCR). This is an accredited test that is also used within the national testing programme.

So, to be clear

They conducted over 200k PCR tests and only got 159 positive results.  Even if you assume every single one of those was a false positive, the specificity is still above 99.9%.

You are getting confused between the PCR tests and the blood antibody tests.

You are also confusing the scenarios, in which the paper uses made-up sensitivity and specificity values to illustrate how sensitive (or otherwise) the actual results are to variations in those values.
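The arithmetic in the quoted ONS paragraph is easy to verify (a one-line check, nothing beyond what the quote states):

```python
# 159 positives out of 208,730 samples, all assumed false (the worst case).
specificity = 1 - 159 / 208_730
print(f"{specificity:.2%}")  # 99.92%, comfortably above 99.9%
```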


Scott777

Quote from: BeElBeeBub on June 16, 2023, 08:38:55 PM
OK, the S2 specificity wasn't 99.9% (🤞)...

Now, what do you think it was, and why is the specificity chosen for a model important when we have direct evidence the specificity is at least 99.9%, and probably better than 99.99%?


What evidence it is at least 99.9% ? That's not stated in the study, unless you are only looking at scenario 1.

There is a reason for scenario 2.  "To allow for the fact that individuals are self-swabbing, Scenario 2 assumes a lower overall sensitivity rate..." .  You see, with self-swabbing, the data is less reliable.  They have to take that into account.  Therefore, you cannot just ignore scenario 2.  As I have said, the only indication of specificity for that is the evaluation, (98.1% to 99.5%).

patman post

Quote from: BeElBeeBub on June 16, 2023, 09:32:29 PM
And the bias or otherwise of the committee doesn't affect the evidence, much of which is under oath or on the public record.

The "how could we know what's in somebody's mind" argument cuts no ice.

It is common in legal cases with a much higher evidentiary test to infer intent from actions and circumstances.

Murder convictions require intent; if the defendant doesn't admit they intended to kill the victim, there is no direct way to prove they did.

According to Johnson and his supporters, that should mean no murder conviction is possible without a confession.

And yet we do have convictions in those circumstances.

Because it is accepted that it is possible to infer intent from actions and circumstances

If a man digs a pit in his garden, insures his wife's life for £5 million, and googles "how to dispose of a body" a week before she is found dead in said pit, it is reasonable to infer intent even though the defendant claims she died accidentally after a row and he simply panicked.

So it is with Johnson.  The circumstances and evidence are such that any reasonable person would not have been able to make the statements he made to parliament without reckless disregard for truth.

Given Johnson's history and testimony from acquaintances, it is an open question as to whether Johnson actually makes a distinction in his mind between lies and truth.

It is said he simply believes whatever is most convenient for him to believe at any given moment.

In that case it may be possible to argue that he didn't *intend* to mislead parliament, but his inability to distinguish between truth and falsehood simply meant he was unable to tell the truth.

I don't think that is a great defence. Such a person should never be permitted to be an MP, let alone a minister.
Johnson fought to join the club. The club didn't like the way he behaved, so it began procedures to censure the erring member. Because that club did the "decent thing" and let Johnson have a look at its findings, the disgraced member released parts of the inquiry's findings to chosen "confidants", and wrapped them up in his "pretaliation".

As a tory, my feelings are: the sooner this guy's toast the better for the UK...
On climate change — we're talking, we're beginning to act, but we're still not doing enough...

BeElBeeBub

Quote from: patman post on June 16, 2023, 08:49:39 PM
But it wasn't a court case. It was an inquiry run under parliamentary rules. And Johnson attempted to bully and terrorise the mostly Conservative inquiry members, and when his blustering didn't look like working, he attacked the inquiry's members.

The guy's a Trump. He's a blusterer. He's a bully. Who cares if he can keep his breeches buttoned? The guy's a boil on the rapidly rotting rump of Westminster politics...
And the bias or otherwise of the committee doesn't affect the evidence, much of which is under oath or on the public record.

The "how could we know what's in somebody's mind" argument cuts no ice.

It is common in legal cases with a much higher evidentiary test to infer intent from actions and circumstances.

Murder convictions require intent; if the defendant doesn't admit they intended to kill the victim, there is no direct way to prove they did.

According to Johnson and his supporters, that should mean no murder conviction is possible without a confession.

And yet we do have convictions in those circumstances.

Because it is accepted that it is possible to infer intent from actions and circumstances 

If a man digs a pit in his garden, insures his wife's life for £5 million, and googles "how to dispose of a body" a week before she is found dead in said pit, it is reasonable to infer intent even though the defendant claims she died accidentally after a row and he simply panicked.

So it is with Johnson.  The circumstances and evidence are such that any reasonable person would not have been able to make the statements he made to parliament without reckless disregard for truth.

Given Johnson's history and testimony from acquaintances, it is an open question as to whether Johnson actually makes a distinction in his mind between lies and truth. 

It is said he simply believes whatever is most convenient for him to believe at any given moment.

In that case it may be possible to argue that he didn't *intend* to mislead parliament, but his inability to distinguish between truth and falsehood simply meant he was unable to tell the truth.

I don't think that is a great defence. Such a person should never be permitted to be an MP, let alone a minister.

patman post

Quote from: Nick on June 16, 2023, 05:51:27 AM
The report is tainted because the chairman was giving statements telling everyone what the outcome would be before the committee even sat. If it was a court case it would have been thrown out.
But it wasn't a court case. It was an inquiry run under parliamentary rules. And Johnson attempted to bully and terrorise the mostly Conservative inquiry members, and when his blustering didn't look like working, he attacked the inquiry's members.

The guy's a Trump. He's a blusterer. He's a bully. Who cares if he can keep his breeches buttoned? The guy's a boil on the rapidly rotting rump of Westminster politics...

BeElBeeBub

Quote from: Scott777 on June 16, 2023, 04:50:47 PM
I have a better idea.  You admit you were wrong: S2 has a different specificity.  Then we move on.
OK, the S2 specificity wasn't 99.9% (🤞)...

Now, what do you think it was, and why is the specificity chosen for a model important when we have direct evidence the specificity is at least 99.9%, and probably better than 99.99%?

BeElBeeBub

Quote from: Scott777 on June 16, 2023, 04:52:43 PM
Indeed he is.  Can you name a politician who isn't?
How many MPs have been found in contempt of parliament because they lied to it?  I don't think any.

Almost all. The exception being Rees-Mogg, though that is slightly different, as it was a court that held he had lied, and it was to the Queen and not parliament.

There is also the matter that Johnson has been fired from 2 previous jobs for lying: once as a journalist for making up quotes, and once as a minister for lying about an affair to the PM.