A Weightloss and diet forum. WeightLossBanter


From The London Times -- Realistic Perspective of Recent Cancer Study



 
 
  #1  
Old November 3rd, 2007, 12:37 PM posted to alt.support.diet.low-carb
Jim
external usenet poster
 
Posts: 279
From The London Times -- Realistic Perspective of Recent Cancer Study

From The Times
November 3, 2007
Bacon. Be afraid? Or not very afraid?
http://www.timesonline.co.uk/tol/lif...cle2796330.ece

The sizzling debate over epistemology: can you still feel contented
about eating bacon?


The World Cancer Research Fund stigmatised bacon, along with other
processed meats
Nigel Hawkes, Health Editor

Millions of people are confused by health advice. It appears so
contradictory that the simplest thing is to disbelieve it all.

Nowhere is this truer than in advice over diet. This week the World
Cancer Research Fund stigmatised bacon – along with other processed
meats – by advising those who want to avoid cancer to cut it out of
their diet. What’s their beef? They have to be kidding, surely?

At issue here is the whole question of how we know what we know – what
philosophers call epistemology. So this is a page about epistemology, a
lovely word that seldom creeps into even a newspaper as upmarket as The
Times.

Where do all these claims about diet and health come from?

They come from studies launched by scientists to try to unravel the
causes of disease. We know that many diseases are caused by germs, but
thanks to vaccines and antibiotics most of these infectious diseases are
now under control. We are left with the diseases caused by age, diet and
lifestyle: principally heart disease and cancer, which between them are
the cause of more than half of all deaths in developed countries.

Hang on. You’ve just said that heart disease and cancer are caused by
age, diet or lifestyle, without any evidence. How do we know that?

Both are commoner in older people than younger ones. And both are
commoner in some communities than in others, while some lifestyle links
– between smoking and both cancer and heart disease, for example – have
been well proven. So it is certainly a valid hypothesis that there are
features of modern life and diet that contribute to disease and it is
worth trying to find out what they are.

What’s the best way to do that?

The best way would be the way that new medicines are tested, in a
double-blind placebo-controlled trial. One group would be fed on the
food under suspicion, the other given a matching but harmless placebo,
and they would be followed until they developed cancer, or died. Neither
group would know which they were getting, nor would those responsible
for running the trial, to avoid accidental bias. This is the gold
standard, but it’s entirely impracticable in most cases for dietary
studies in free-living human beings. Life’s too short, especially if you
are in the group randomised to bacon.
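
(A rough sketch, not part of the Times piece: in Python, with entirely
invented participants and risks, the principle of such a trial is just
random assignment to two arms followed by a comparison of event rates.)

import random

random.seed(1)

# Randomly assign 1,000 hypothetical participants to an "exposed" arm
# (the food under suspicion) or a "placebo" arm.
arms = ["exposed" if random.random() < 0.5 else "placebo" for _ in range(1000)]

# Hypothetical follow-up: assume, purely for illustration, a 3% event rate
# on placebo and 4% on the suspect food.
def had_event(arm):
    risk = 0.04 if arm == "exposed" else 0.03
    return random.random() < risk

events = {"exposed": 0, "placebo": 0}
totals = {"exposed": 0, "placebo": 0}
for arm in arms:
    totals[arm] += 1
    events[arm] += had_event(arm)

for arm in ("exposed", "placebo"):
    print("%-8s %3d events / %4d people (risk %.3f)"
          % (arm, events[arm], totals[arm], events[arm] / totals[arm]))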

And the next best?

There are many alternative ways of studying dietary effects, generally
known as observational studies. They fall into two broad groups: cohort
studies, and case-control studies. They have their strengths and weaknesses.

An example, please?

The Framingham Study, based on a community in Massachusetts, and running
since 1948, is a classic cohort study. It has provided most of the
evidence doctors now rely on for estimating the effects of diet,
exercise and drugs such as aspirin, on the risk of heart disease.

In a cohort study a particular population – in this case just over 5,000
adults from Framingham – is closely examined at the start and details
taken of every physical variable (such as blood pressure and cholesterol
levels) as well as each individual’s diet, exercise and smoking habits.
The participants are then followed to some predetermined end-point
(death is the least ambiguous) and correlations drawn between the
variables measured and the cause of death.
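
(To make the mechanics concrete: a small sketch with invented records,
nothing to do with the real Framingham data. Baseline measurements are
recorded once, outcomes are observed later, and the two are compared, here
as a relative risk.)

# Each record: (smoker at baseline?, systolic BP at baseline,
#               died of heart disease during follow-up?)
cohort = [
    (True, 150, True),
    (False, 120, False),
    (True, 140, False),
    (False, 130, True),
    (False, 125, False),
    (True, 160, True),
]

def incidence(records):
    return sum(died for _, _, died in records) / len(records)

smokers = [r for r in cohort if r[0]]
non_smokers = [r for r in cohort if not r[0]]

print("incidence among smokers:     %.2f" % incidence(smokers))
print("incidence among non-smokers: %.2f" % incidence(non_smokers))
print("relative risk: %.1f" % (incidence(smokers) / incidence(non_smokers)))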

And a case-control study?

In this case, researchers identify people who already have the disease
they are interested in – colon cancer, say – and compare them with
another set of people as nearly matched as possible. By questioning the
cases and the controls about their diet and lifestyle, they attempt to
tease out differences that may explain why one group developed cancer,
and the other didn’t.
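
(Because the groups are assembled after the fact, a case-control study
cannot measure incidence directly; the usual summary is an odds ratio of
exposure among cases versus controls. A minimal sketch with made-up counts
follows.)

# Hypothetical 2x2 table: exposure to processed meat among colon-cancer
# cases and among matched controls (invented numbers).
cases_exposed, cases_unexposed = 60, 40
controls_exposed, controls_unexposed = 45, 55

odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print("odds ratio of exposure, cases vs controls: %.2f" % odds_ratio)

An odds ratio well above 1 suggests cases were more often exposed than
controls, but as the article goes on to explain, recall and matching
problems can produce such a figure without any real effect.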

Which is better?

Cohort studies are much the better, assuming that the data gathered at
the beginning are complete and reliable. But they are expensive, take a
very long time, and if you fail to ask at the beginning about some
variable that subsequently turns out to be important, it’s too late.

A sub-category of the cohort study is the nested case-control study.
This adopts the case-control methodology, but using participants whose
characteristics are already known because they are part of the cohort
study. It is quite a powerful tool.

Ordinary case-control studies are the commonest and, alas, the weakest.
They are inexpensive and get results fast, but are unreliable because
they rely on participants looking back and remembering how they ate and
how they lived years before they developed the disease. People’s
memories are poor, and are often influenced by what they think they
ought to say. This is known as recall bias.

Any other problems?

Lots. It’s impossible to be certain that the controls really match the
cases. In case-control studies of smoking or drinking, for example,
people who claim to be nonsmokers or nondrinkers may in fact be
ex-smokers or ex-drinkers, whose health was damaged before they gave up.
This is misclassification bias.

It is also difficult to rule out confounding factors, where an
association is found but does not prove anything. For example, in
observational studies people who take vitamin pills appear to suffer
less cancer and heart disease, but in double-blind trials, this benefit
disappears.

Why? Probably because vitamin-taking is simply a marker for people who
are health-conscious generally. The benefit comes from some other aspect
of their behaviour which cannot be adequately corrected for.
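
(A toy simulation, with invented numbers: in the model below vitamins do
nothing at all, yet vitamin users still look protected, because
health-conscious people are both more likely to take them and less likely
to fall ill. That is confounding in miniature.)

import random

random.seed(2)

def person():
    health_conscious = random.random() < 0.5
    takes_vitamins = random.random() < (0.7 if health_conscious else 0.2)
    # Disease risk depends only on being health-conscious, never on vitamins.
    disease = random.random() < (0.05 if health_conscious else 0.15)
    return takes_vitamins, disease

people = [person() for _ in range(20000)]

def rate(group):
    return sum(ill for _, ill in group) / len(group)

users = [p for p in people if p[0]]
non_users = [p for p in people if not p[0]]
print("disease rate among vitamin users: %.3f" % rate(users))
print("disease rate among non-users:     %.3f" % rate(non_users))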

Finally, there are statistics. A result can be claimed as statistically
significant if the odds of its arising by chance are no more than one in
20. Those are not especially long odds, so plenty of spurious results get
published.
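
(A quick sketch of what "one in 20" means in practice, using simulated
data only: run many studies in which there is genuinely nothing to find,
and about 5 per cent of them still come out "significant".)

import random

random.seed(0)

def null_study(n=500):
    """Compare two groups drawn from the same population: no true effect."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    z = diff / ((2 / n) ** 0.5)   # crude z-test; unit variance assumed known
    return abs(z) > 1.96          # "significant" at the 5 per cent level

runs = 2000
hits = sum(null_study() for _ in range(runs))
print("spurious 'significant' results: %d of %d (about %.1f%%)"
      % (hits, runs, 100 * hits / runs))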

If these kinds of studies are so useless, why do people do them?

They are not useless, entirely. They are a good way of forming
hypotheses, and building up knowledge. They just fall a long way short
of proof.

Can’t we do better?

The obvious thing to do is to combine lots of studies together in a
meta-analysis. This is especially popular, as it is a desk job not
requiring any new research, or much in the way of grants. The strength
of the technique is that an accumulation of studies may have greater
statistical power to detect small effects, but its weaknesses come from
selection bias (which studies are included and which aren’t) and its
close relation, publication bias (you can’t include unpublished studies,
which are usually the ones that show no effect).
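
(In outline, and as a sketch with invented study results rather than the
WCRF's actual method: a simple fixed-effect meta-analysis pools each
study's estimate with a weight inversely proportional to its variance,
which is why several modest studies together can reach a statistical power
none of them has alone.)

# Hypothetical per-study results: (estimated log relative risk, standard error).
studies = [
    (0.10, 0.12),
    (0.25, 0.20),
    (0.05, 0.15),
    (0.18, 0.10),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print("pooled log relative risk: %.3f (standard error %.3f)" % (pooled, pooled_se))

Note that the pooling can only be as good as the studies that go into it,
which is where the selection and publication biases mentioned above creep in.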

How does all this relate to the WCRF study?

This was a meta-analysis that included studies of every sort, from
double-blind trials to observational studies. It was highly selective,
boiling down 500,000 papers to the final 7,000 that were used to draw
conclusions. So while it was a useful distillation of the literature, it
was no more than that. Another group might have chosen a different 7,000
papers, and reached different conclusions.

So should we chew bacon, or eschew it?

It’s unlikely, despite the WCRF, that occasional consumption of
processed meats will make any perceptible difference to an individual.
Across the population as a whole it may be detectable, but to an
individual a small change to a small risk is beneath the threshold of
detection. WCRF is on stronger ground when it advises people to stay
thin. Why do I say that? Oh, just a gut feeling.
  #2  
Old November 4th, 2007, 04:17 PM posted to alt.support.diet.low-carb
Cubit
external usenet poster
 
Posts: 653
From The London Times -- Realistic Perspective of Recent Cancer Study

However questionable the research may be, given how the studies were
selected, it is interesting that they would point the finger at
*processed* meats. If this were just another vegan-inspired attack, they
would target all meats.

My intuition supports the idea of avoiding all Frankenfoods, so I like
this WCRF idea.

I work so hard at controlling my macronutrient ratios and calories, but I
have often neglected the care needed to avoid Frankenfoods. I have avoided
the 50-ingredient labels. Maybe I need to avoid the dozen-ingredient
labels too.



"Jim" wrote in message
...
From The Times
November 3, 2007
Bacon. Be afraid? Or not very afraid?
http://www.timesonline.co.uk/tol/lif...cle2796330.ece

The sizzling debate over epistemology: can you still feel contented about
eating bacon?


The World Cancer Research Fund stigmatised bacon, along with other
processed meats
Nigel Hawkes, Health Editor

Millions of people are confused by health advice. It appears so
contradictory that the simplest thing is to disbelieve it all.

Nowhere is this truer than in advice over diet. This week the World Cancer
Research Fund stigmatised bacon – along with other processed meats – by
advising those who want to avoid cancer to cut it out of their diet. What’s
their beef? They have to be kidding, surely?

At issue here is the whole question of how we know what we know – what
philosophers call epistemology. So this is a page about epistemology, a
lovely word that seldom creeps into even a newspaper as upmarket as The
Times.

Where do all these claims about diet and health come from?

They come from studies launched by scientists to try to unravel the causes
of disease. We know that many diseases are caused by germs, but thanks to
vaccines and antibiotics most of these infectious diseases are now under
control. We are left with the diseases caused by age, diet and lifestyle:
principally heart disease and cancer, which between them are the cause of
more than half of all deaths in developed countries.

Hang on. You’ve just said that heart disease and cancer are caused by age,
diet or lifestyle, without any evidence. How do we know that?

Both are commoner in older people than younger ones. And both are commoner
in some communities than in others, while some lifestyle links – between
smoking and both cancer and heart disease, for example – have been well
proven. So it is certainly a valid hypothesis that there are features of
modern life and diet that contribute to disease and it is worth trying to
find out what they are.

What’s the best way to do that?

The best way would be the way that new medicines are tested, in a
double-blind placebo-controlled trial. One group would be fed on the food
under suspicion, the other given a matching but harmless placebo, and they
would be followed until they developed cancer, or died. Neither group
would know which they were getting, nor would those responsible for
running the trial, to avoid accidental bias. This is the gold standard,
but it’s entirely impracticable in most cases for dietary studies in
free-living human beings. Life’s too short, especially if you are in the
group randomised to bacon.

And the next best?

There are many alternative ways of studying dietary effects, generally
known as observational studies. They fall into two broad groups: cohort
studies, and case-control studies. They have their strengths, and
weaknesses.

An example, please?

The Framingham Study, based on a community in Massachusetts, and running
since 1948, is a classic cohort study. It has provided most of the
evidence doctors now rely on for estimating the effects of diet, exercise
and drugs such as aspirin, on the risk of heart disease.

In a cohort study a particular population – in this case just over 5,000
adults from Framingham – is closely examined at the start and details
taken of every physical variable (such as blood pressure and cholesterol
levels) as well as each individual’s diet, exercise and smoking habits.
The participants are then followed to some predetermined end-point (death
is the least ambiguous) and correlations drawn between the variables
measured and the cause of death.

And a case-control study?

In this case, researchers identify people who already have the disease
they are interested in – colon cancer, say - and compare them with another
set of people as nearly matched as possible. By questioning the cases and
the controls about their diet and lifestyle, they attempt to tease out
differences that may explain why one group developed cancer, and the other
didn’t.

Which is better?

Cohort studies are much the better, assuming that the data gathered at the
beginning are complete and reliable. But they are expensive, take a very
long time, and if you fail to ask at the beginning about some variable
that subsequently turns out to be important, it’s too late.

A sub-category of the cohort study is the nested case-control study. This
adopts the case-control methodology, but using participants whose
characteristics are already known because they are part of the cohort
study. It is quite a powerful tool.

Ordinary case-control studies are the commonest and, alas, the weakest.
They are inexpensive and get results fast, but are unreliable because they
rely on participants looking back and remembering how they ate and how
they lived years before they developed the disease. People’s memories are
poor, and are often influenced by what they think they ought to say. This
is known as recall bias.

Any other problems?

Lots. It’s impossible to be certain that the controls really match the
cases. In case-control studies of smoking or drinking, for example, people
who claim to be nonsmokers or nondrinkers may in fact be ex-smokers or
ex-drinkers, whose health was damaged before they gave up. This is
misclassification bias.

It is also difficult to rule out confounding factors, where an association
is found but does not prove anything. For example, in observational
studies people who take vitamin pills appear to suffer less cancer and
heart disease, but in double-blind trials, this benefit disappears.

Why? Probably because vitamin-taking is simply a marker for people who are
health-conscious generally. The benefit comes from some other aspect of
their behaviour which cannot be adequately corrected for.

Finally, there are statistics. A result can claim to be statistically
significant if the odds of it arising by chance are one in 20. Those are
not especially long odds, so plenty of spurious results get published.

If these kinds of studies are so useless, why do people do them?

They are not useless, entirely. They are a good way of forming hypotheses,
and building up knowledge. They just fall a long way short of proof.

Can’t we do better?

The obvious thing to do is to combine lots of studies together in a
meta-analysis. This is especially popular, as it is a desk job not
requiring any new research, or much in the way of grants. The strength of
the technique is that an accumulation of studies may have greater
statistical power to detect small effects, but its weaknesses comes from
selection bias (which studies are included and which aren’t) and its close
relation, publication bias (you can’t include unpublished studies, which
are usually the ones that show no effect).

How does all this relate to the WCRF study?

This was a meta-analysis that included studies of every sort, from
double-blind trials to observational studies. It was highly selective,
boiling down 500,000 papers to the final 7,000 that were used to draw
conclusions. So while it was a useful distillation of the literature, it
was no more than that. Another group might have chosen a different 7,000
papers, and reached different conclusions.

So should we chew bacon, or eschew it?

It’s unlikely, despite the WCRF, that occasional consumption of processed
meats will make any perceptible difference to an individual. Across the
population as a whole it may be detectable, but to an individual a small
change to a small risk is beneath the threshold of detection. WCRF is on
stronger ground when it advises people to stay thin. Why do I say that?
Oh, just a gut feeling.



 




Thread Tools
Display Modes

Posting Rules
You may not post new threads
You may not post replies
You may not post attachments
You may not edit your posts

vB code is On
Smilies are On
[IMG] code is Off
HTML code is Off
Forum Jump

Similar Threads
Thread Thread Starter Forum Replies Last Post
London Marathon 2007 Rachael Reynolds General Discussion 8 July 18th, 2006 04:43 PM
Lunch in London Beverly General Discussion 12 August 22nd, 2005 07:17 PM
Day in London Foxy Weightwatchers 2 June 29th, 2004 11:41 PM
London Marathon - thinking of entering Jane Lumley Low Carbohydrate Diets 0 September 29th, 2003 05:16 PM


All times are GMT +1. The time now is 06:24 PM.


Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 WeightLossBanter.
The comments are property of their posters.