
Nowadays there is tremendous concern about the spread of antibiotic-resistant bacteria, or "superbugs," throughout the world. Articles frequently mention India being at the epicenter of this crisis - that is, the source of many antibiotic-resistant strains (both in and out of hospitals), which then travel throughout the world due to global travel. The massive overuse and misuse of antibiotics (whether in humans, animals, or even crops) is usually given as the major reason for the development of antibiotic-resistant strains of bacteria (here, here, and here).

Thus the following article about unchecked pollution from pharmaceutical companies in India fueling the creation of deadly superbugs was shocking to read - and it may explain why the problem is so severe there. Note that the Indian companies supply just about all the world's major drug companies with antibiotics and antifungals. It appears that the companies are ignoring local laws (which have been called "toothless") that would cut down on the pollution. What is stressed in the article is that one of the world's biggest drug production hubs (the Indian city of Hyderabad) is producing dangerous levels of pharmaceutical pollution, and the international agencies that ensure drug safety are basically ignoring this problem (and doing little to address it).

Thousands of tons of pharmaceutical waste are produced each day by the many pharmaceutical companies in Hyderabad, India, and this waste is contaminating water sources in the area. As a result, water samples (from rivers, lakes, groundwater, drinking water, surface water, and treated sewage water) in that area contain bacteria and fungi resistant to multiple drugs (superbugs), and these superbugs then spread to humans throughout India and eventually globally. This article is definitely worth reading in its entirety. Excerpts from The Bureau of Investigative Journalism:

Big Pharma's Pollution Is Creating Deadly Superbugs While The World Looks The Other Way

Industrial pollution from Indian pharmaceutical companies making medicines for nearly all the world’s major drug companies is fueling the creation of deadly superbugs, suggests new research. Global health authorities have no regulations in place to stop this happening. A major study published today in the prestigious scientific journal Infection found “excessively high” levels of antibiotic and antifungal drug residue in water sources in and around a major drug production hub in the Indian city of Hyderabad, as well as high levels of bacteria and fungi resistant to those drugs. Scientists told the Bureau the quantities found meant they believe the drug residues must have originated from pharmaceutical factories.

The presence of drug residues in the natural environment allows the microbes living there to build up resistance to the ingredients in the medicines that are supposed to kill them, turning them into what we call superbugs. The resistant microbes travel easily and have multiplied in huge numbers all over the world, creating a grave public health emergency that is already thought to kill hundreds of thousands of people a year.

When antimicrobial drugs stop working, common infections can become fatal, and scientists and public health leaders say the worsening problem of antibiotic resistance (also known as AMR) could reverse half a century of medical progress if the world does not act fast. Yet while policies are being put into place to counter the overuse and misuse of drugs which has propelled the crisis, international regulators are allowing dirty drug production methods to continue unchecked. Global authorities like the Food and Drug Administration and the European Medicines Agency strictly regulate drug supply chains in terms of drug safety - but environmental standards do not feature in their rulebook. Drug producers must adhere to Good Manufacturing Practices (GMP) guidelines - but those guidelines do not cover pollution.

The international bodies say the governments of the countries where the drugs are made are the ones responsible for stopping pollution - but domestic legislation is having little impact on the ground, say the study's authors. The lack of international regulation must be addressed, they argue, highlighting the grave public health threat faced by antibiotic resistance as well as the rampant global spread of superbugs from India, which has become an epicentre of the crisis.

A group of scientists based at the University of Leipzig worked with German journalists to take an in-depth look at pharmaceutical pollution in Hyderabad, where 50% of India’s drug exports are produced. A fifth of the world’s generic drugs are produced in India, with factories based in Hyderabad supplying Big Pharma and public health authorities like the World Health Organisation with millions of tons of antibiotics and antifungals each year.

The researchers tested 28 water samples in and around the Patancheru-Bollaram Industrial zone on the outskirts of the city, where more than 30 drug manufacturing companies supplying nearly all the world’s major drug companies are based. Thousands of tons of pharmaceutical waste are produced by the factories each day, the paper says. Almost all the samples contained bacteria and fungi resistant to multiple drugs (known as MDR pathogens, the technical name for superbugs). Researchers then tested 16 of the samples for drug residues and found 13 of them were contaminated with antibiotics and antifungals. Previous studies have shown how exposure to antibiotics and antifungals in the environment causes bacteria and fungi to develop immunity to those drugs.

Environmental pollution and poor management of wastewater in Hyderabad is causing “unprecedented antimicrobial drug contamination” of surrounding water sources, conclude the researchers - contamination which appears to be driving the creation and spread of dangerous superbugs which have spread across the world. Combined with the mass misuse of antibiotics and poor sanitation, superbugs are already having severe consequences in India - an estimated 56,000 newborn babies die from resistant infections there each year.

The companies in question strongly deny that their factories pollute the environment, and the sheer number of factories operating in Hyderabad means it is impossible to identify exactly which companies are responsible for the contamination found in the samples tested. What is clear is one of the world’s biggest drug production hubs is producing dangerous levels of pharmaceutical pollution, and the international bodies tasked with ensuring drug safety are doing little to address it.

Around 170 companies making bulk drugs like antibiotics operate in and around Hyderabad, the majority clustered in sprawling industrial estates along the banks of the Musi river. Companies in Europe and the US, as well as health authorities like WHO and the UK’s NHS, are reliant on drugs being produced in these factories.

The area has long been criticised for its pollution, which has continued unabated despite decades of campaigning by Indian NGOs, say the report authors. In 2009 the Patancheru-Bollaram zone was classified as “critically polluted” in India’s national pollution index and construction in the area was banned. But the government relaxed the rules in 2014 and building was allowed to begin again. Last year India’s Supreme Court ordered the country’s pharmaceutical companies to operate a zero liquid waste policy, but “massive violations” have reportedly occurred, says the Infection report. ... India has become the epicentre of the global drug resistance crisis, with 56,000 newborn Indian babies estimated to die each year from drug-resistant blood infections, and 70 to 90% of people who travel to India returning home with multi-drug-resistant bacteria in their gut, according to the study.

Researchers took water samples from rivers, lakes, groundwater, drinking water and surface water from rural and urban areas in and around the industrial estate, as well as pools near factories and water sources contaminated by sewage treatment plants. Four were taken from taps, one from a borehole, and the remaining 23 were classed as environmental samples. The samples were tested for bacteria resistant to multiple drugs (known as MDR pathogens, the technical name for superbugs). The researchers then tested 16 of the samples for the antibiotics and antifungals used to treat infections. All samples apart from one taken from tap water at a four-star hotel were found to contain drug-resistant bacteria. All 23 environmental samples contained carbapenemase-producing bacteria - a group of bugs dubbed the “nightmare bacteria” because they are virtually untreatable and kill 40-50% of people whose blood gets infected with them.

Of the 16 samples then tested for drug residue, 13 were found to be contaminated with antibiotics and antifungals, some in disturbingly high levels. The researchers compared the levels of residue to limits recommended by leading microbiologists; once levels exceed those limits it is likely that superbugs will develop. The amounts of antimicrobials found in the new tests were “eye-wateringly high”, said Dr Mark Holmes, a microbiologist at the University of Cambridge. “The quantities involved mean the amount in the water is almost the same as a therapeutic dose,” he said, calling on the Indian authorities to investigate immediately by testing each factory’s effluent. 

There are reams of regulations and stipulations that manufacturers have to adhere to in order to export their products to the US and Europe – known as the Good Manufacturing Practices (GMP) framework. These focus on making sure drugs are safe, pure, and effective. Stringent inspections by the FDA, WHO and European authorities check that these rules are being followed. However these regulations do not address environmental concerns. Inspectors have no mandate to sanction a factory for polluting, failing to treat its waste or other environmental problems – this falls within the remit of local governments.

Uh-oh... looks like any benefits of moderate alcohol consumption may not extend to the brain, at least in men. A recent study found that moderate alcohol consumption over the course of 30 years was associated with increased odds of hippocampal atrophy (brain damage in the hippocampus of the brain) - when compared to abstainers. Hippocampal atrophy causes memory problems and affects spatial navigation, and is also an early characteristic of Alzheimer's disease and other dementias. This result occurred in a dose-dependent fashion - meaning the more that was drunk regularly, the more the atrophy in that area of the brain.

The heavier drinkers (when compared to abstainers) also had a faster decline in verbal skills ("verbal fluency") and changes in the white matter of the brain (specifically "corpus callosum microstructure"). There was no protective effect of light drinking when compared to abstainers (the 2 groups had similar results). **However, the researchers also reported: "The hippocampal atrophy associations we found in the total sample were replicated in men alone but not in women." Note: there were few women in the study (only 103 out of 527 studied) and even fewer were "heavy" drinkers (14 women), but one wonders - why not? Why didn't women drinkers have these brain changes?**

So how much did the moderate drinkers drink? They really didn't drink that much, but there were different groups: the abstainers (less than 1 unit of alcohol a week), the "light" drinkers (1 to <7 units a week), the "moderate" drinkers (7 to <14 units a week for women and 7 to <21 units for men), and the heavier drinkers - those who drank more than that, averaging about 30 units a week. What is a "unit" of alcohol? A medium glass of wine has about two units of alcohol, and so does a pint of ordinary-strength beer or lager. Thus the male moderate drinkers drank about a medium glass of wine or a beer each night, and maybe a little extra on the weekends. (In other words, not that much.) And the heaviest drinkers had a little more than two medium glasses of wine or two beers every night of the week, plus a little more on weekends.
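For readers who want to see the arithmetic spelled out, here is a rough, illustrative sketch of how those weekly units translate into everyday drinks. The only assumption beyond the numbers above is the approximation already used in this post: about 2 units per medium glass of wine or pint of ordinary-strength beer.

```python
# Rough conversion of the study's weekly alcohol "units" into everyday drinks.
# Assumption (from the text above): ~2 units per medium glass of wine or
# pint of ordinary-strength beer. Illustrative only.

UNITS_PER_DRINK = 2

groups = {
    "light (up to <7 units/week)": 7,
    "moderate, men (up to <21 units/week)": 21,
    "heavier (average ~30 units/week)": 30,
}

for label, weekly_units in groups.items():
    drinks_per_week = weekly_units / UNITS_PER_DRINK
    print(f"{label}: ~{drinks_per_week:.1f} drinks/week, "
          f"or ~{drinks_per_week / 7:.1f} per day")
```

Running this gives roughly 1.5 drinks a day for the top of the male "moderate" range and a little over 2 a day for the heavier drinkers - the same numbers described in prose above.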

What do the results mean? The researchers said that they don't have any evidence linking the brain changes they saw on the MRI brain scans to any negative general cognitive effects, but the drinkers did lose more "language fluency" over time. (The researchers gave the people various tests.) The abstainer group (37 people) was very small - perhaps other lifestyle factors (e.g., nutrition) may be playing a part in the results. Also, if people under-reported their actual alcohol consumption, that would throw off the results. While studies show that drinking can increase cancer risk, other studies have found that moderate drinkers seem to live longer than abstainers. From Medical Xpress:

Even moderate drinking linked to a decline in brain health, finds study

Alcohol consumption, even at moderate levels, is associated with increased risk of adverse brain outcomes and steeper decline in cognitive (mental) skills, finds a study published by The BMJ today. Heavy drinking is known to be associated with poor brain health, but few studies have examined the effects of moderate drinking on the brain—and results are inconsistent. So a team of researchers based at the University of Oxford and University College London set out to investigate whether moderate alcohol consumption has a beneficial or harmful association—or no association at all—with brain structure and function.

They used data on weekly alcohol intake and cognitive performance measured repeatedly over 30 years (1985-2015) for 550 healthy men and women who were taking part in the Whitehall II study. This study is evaluating the impact of social and economic factors on the long term health of around 10,000 British adults. Participants had an average age of 43 at the start of the study and none were alcohol dependent. Brain function tests were carried out at regular intervals and at the end of the study (2012-15), participants underwent an MRI brain scan.... After adjusting for these confounders, the researchers found that higher alcohol consumption over the 30 year study period was associated with increased risk of hippocampal atrophy - a form of brain damage that affects memory and spatial navigation.

While those consuming over 30 units a week were at the highest risk compared with abstainers, even those drinking moderately (14-21 units per week) were three times more likely to have hippocampal atrophy compared with abstainers. There was no protective effect of light drinking (up to 7 units per week) over abstinence.

Higher consumption was also associated with poorer white matter integrity (critical for efficient cognitive functioning) and faster decline in language fluency (how many words beginning with a specific letter can be generated in one minute). But no association was found with semantic fluency (how many words in a specific category can be named in one minute) or word recall. The authors point out that this is an observational study, so no firm conclusions can be drawn about cause and effect, and say some limitations could have introduced bias. [Original study.]

Everyone has heard about the miraculous stories of recovery from cancer using immunotherapy. Immunotherapy involves giving the sick person substances which stimulate the person's own immune system to battle the cancer. And when it works, it's wonderful. But... there's another side - a dark side of harms from these treatments - that is rarely discussed, and it is why we should be very careful going forward.

Here are two articles that do point out the problems - such as that in some people the immunotherapy actually accelerates the cancer being treated (called "hyperprogression of tumors"). Studies suggest that these people share certain genetic characteristics or are over the age of 65. In general, many patients undergoing immunotherapy have side effects from the treatments, some even developing life-threatening ones. Also, most patients do not respond to the immunotherapy treatments, for reasons that remain largely unknown. Obviously more studies are needed. Remember, this field is in its infancy.

Excerpts from Bob Tedeschi's article, from STAT: Cancer researchers worry immunotherapy may hasten growth of tumors in some patients

For doctors at the University of California, San Diego, it was seemingly a no-lose proposition: A 73-year-old patient’s bladder cancer was slowly progressing but he was generally stable and strong. He seemed like the ideal candidate for an immunotherapy drug, atezolizumab, or Tecentriq, that had just been approved to treat bladder cancer patients. Doctors started the patient on the drug in June. It was a spectacular failure: Within six weeks, he was removed from the drug, and he died two months later.

In a troubling phenomenon that researchers have observed in a number of cases recently, the treatment appeared not only to fail to thwart the man’s cancer, but to unleash its full fury. It seemed to make the tumor grow faster. The patient’s case was one of a handful described last week in the journal Clinical Cancer Research. Of the 155 cases studied, eight patients who had been fairly stable before immunotherapy treatment declined rapidly, failing the therapy within two months. Six saw their tumors enter a hyperactive phase, where the tumors grew by between 53 percent and 258 percent.

“There’s some phenomenon here that seems to be true, and I think we cannot just give this therapy randomly to the patient,” the author of the study, Dr. Shumei Kato, an oncologist at UC San Diego, said in an interview with STAT. “We need to select who’s going to be on it.” .... But similar findings were published last year by cancer researchers at the Gustave Roussy Institute in France. These results were considered controversial by some, since they hadn’t been widely confirmed by other oncologists.

In the latest Clinical Cancer Research findings, those who experienced the hyperprogression of tumors, as the phenomenon is known, shared specific genetic characteristics. In all six patients with so-called amplifications in the MDM2 gene family, and two of 10 patients with alterations in the EGFR gene, the anti-PD-1 or anti-PD-L1 immunotherapies quickly failed, and the patients’ cancers progressed rapidly. ... Doctors who prescribe immunotherapies may be able to identify at-risk patients by submitting tumors for genetic testing, Kato and his coauthors suggested.

The findings published last year by the Gustave Roussy team also appeared in Clinical Cancer Research. In that study, of 131 patients, 12 patients, or 9 percent, showed hyperprogressive growth after taking anti-PD-1 or anti-PD-L1 immunotherapies. The lead author of that study, Stephane Champiat, acknowledged that the research so far raises more questions than it answers. ... Champiat suggested factors that could be associated with the effect. In his study’s patients, for instance, those who were older than 65 showed hyperprogressive growth at twice the rate of younger patients.

Oncologists studying this phenomenon said it could complicate treatment strategies, because some patients who receive immunotherapies can exhibit what’s known as “pseudo-progression,” in which tumor scans reveal apparent growth. In reality, however, the scans are instead showing areas where the cancer is being attacked by armies of immune cells. Roughly 10 percent of melanoma patients on immunotherapies, for instance, experience this phenomenon.

Jimmy Carter is perhaps the best-known immunotherapy success story. But most patients do not respond to the immunotherapy treatments, for reasons that remain largely unknown. In a study by Prasad and Dr. Nathan Gay, also of Oregon Health and Science University, nearly 70 percent of Americans die from forms of cancer for which there is no immunotherapy option, and for the rest who do qualify for immunotherapy, only 26 percent actually see their tumors shrink.

And while immunotherapies typically include less intrusive side effects than chemotherapy, those side effects, when they happen, can be life-threatening. Researchers have reported cases in which immunotherapies attacked vital organs, including the colon, liver, lungs, kidney, and pancreas, with some patients experiencing acute, rapid-onset diabetes after receiving the treatments. But in those cases, the treatments were at least attacking the cancer. Such reports didn’t raise the specter of these treatments possibly working on the cancer’s behalf to shift it into overdrive.

A group of killer T cells (green and red) surrounds a cancer cell (blue, center). Credit: NIH.

From Health News Review: Cancer immunotherapy: more reason for concern?

Immunotherapy for cancer — which basically involves manipulating our immune system to attack cancer cells — is ever so slowly creeping toward the scrutiny phase. This slow crawl is something we reported on four months ago. With few exceptions, like this deep dive by the New York Times (the Cell Wars series), we found many journalists completely overlooked the harms (some life-threatening) of this oft-vaunted treatment.

Interesting new study! Researchers analyzed baby teeth among twins - sets of twins where both are healthy, and sets of twins where one has autism spectrum disorder (ASD) but not the other (the control). They found that in the children who developed ASD, the teeth revealed that during the second and third trimesters and 30 weeks after birth they had higher levels of lead (which is a neurotoxin) and lower levels of the essential nutrients manganese and zinc. There were also differences between ASD and controls in levels of other elements, including tin, strontium, and chromium - but each of these elements differed the most between the ASD and control twin at different points in time.

How many people know that during fetal and childhood development, a new tooth layer is formed every week or so in the developing baby, and that each layer carries an "imprint" of the chemicals the baby was exposed to? So there's a chronological record of exposure - similar to using growth rings on a tree to find out the tree's growth history. A laser was used to remove a tiny bit of the tooth dentine layer, which was then analyzed for various metals (see the illustration below).

Now studies are needed to determine whether the differences in the amounts of lead and other metals are due to differences in how much a fetus or baby is exposed to them, or whether they occur because of genetic differences in how a baby takes in and handles these metals and nutrients. And, of course, other studies suggest that other environmental exposures (e.g., pesticides) may also play a part in ASD development. From Science Daily:

Exposure to specific toxins and nutrients during late pregnancy and early life correlate with autism risk

Using evidence found in baby teeth, researchers from The Senator Frank R. Lautenberg Environmental Health Sciences Laboratory and The Seaver Autism Center for Research and Treatment at Mount Sinai found that differences in the uptake of multiple toxic and essential elements over the second and third trimesters and early postnatal periods are associated with the risk of developing autism spectrum disorders (ASD), according to a study published June 1 in the journal Nature Communications.

The critical developmental windows for the observed discrepancies varied for each element, suggesting that systemic dysregulation of environmental pollutants and dietary elements may serve an important role in ASD. In addition to identifying specific environmental factors that influence risk, the study also pinpointed developmental time periods when elemental dysregulation poses the biggest risk for autism later in life.

According to the U.S. Centers for Disease Control and Prevention, ASD occurs in 1 of every 68 children in the United States. The exact causes are unknown, but previous research indicates that both environmental and genetic causes are likely involved. While the genetic component has been intensively studied, specific environmental factors and the stages of life when such exposures may have the biggest impact on the risk of developing autism are poorly understood. Previous research indicates that fetal and early childhood exposure to toxic metals and deficiencies of nutritional elements are linked with several adverse developmental outcomes, including intellectual disability and language, attentional, and behavioral problems.

"We found significant divergences in metal uptake between ASD-affected children and their healthy siblings, but only during discrete developmental periods," said Manish Arora, PhD, BDS, MPH, Director of Exposure Biology at the Senator Frank Lautenberg Environmental Health Sciences Laboratory at Mount Sinai and Vice Chair and Associate Professor in the Department of Environmental Medicine and Public Health at the Icahn School of Medicine at Mount Sinai. "Specifically, the siblings with ASD had higher uptake of the neurotoxin lead, and reduced uptake of the essential elements manganese and zinc, during late pregnancy and the first few months after birth, as evidenced through analysis of their baby teeth. Furthermore, metal levels at three months after birth were shown to be predictive of the severity of ASD eight to ten years later in life."

To determine the effects that the timing, amount, and subsequent absorption of toxins and nutrients have on ASD, Mount Sinai researchers used validated tooth-matrix biomarkers to analyze baby teeth collected from pairs of identical and non-identical twins, of which at least one had a diagnosis of ASD. They also analyzed teeth from pairs of normally developing twins that served as the study control group. During fetal and childhood development, a new tooth layer is formed every week or so, leaving an "imprint" of the micro chemical composition from each unique layer, which provides a chronological record of exposure. The team at the Lautenberg Laboratory used lasers to reconstruct these past exposures along incremental markings, similar to using growth rings on a tree to determine the tree's growth history. [Original study.]

  Cross-section of tooth showing laser removal of the dentine layer (in tan), for analysis of metal content. Credit: J. Gregory, Copyright Mount Sinai Health System, 2017

Another study finding health benefits of a fiber-rich diet, which means lots of fruits, vegetables, whole grains, legumes (beans), nuts, and seeds. This time, researchers doing an analysis of 2 studies lasting a number of years found an association between more fiber in the diet and less risk of developing knee osteoarthritis pain and of knee osteoarthritis symptoms worsening. The highest fiber group reported eating a median (middle number) of 25.5 grams of fiber per day, while the lowest fiber group had a median of about 9 grams of fiber per day. They found a dose-dependent relationship - the more fiber, the less osteoarthritis knee pain, and vice versa (the less daily fiber, the more reported knee pain worsening) - this is called a "dose-dependent inverse relationship". The average fiber intake for Americans is about 15 grams per day.

The researchers also found that the more fiber in the diet, the lower the Body Mass Index (less weight) - but they say they took that into account in the analyses, and found that the amount of fiber intake was the most important thing regarding knee osteoarthritis pain. Interestingly, they did not find an association between fiber intake and x-ray evidence of osteoarthritis. Note that this was an observational study - it observed that certain things go hand in hand, but it doesn't prove causation.

Osteoarthritis (OA) is common among adults aged 60 years and older, and is sometimes called "wear and tear" arthritis because the cartilage in the joints wears down over time. It causes pain and limits a person's physical functioning. There is a strong association between obesity, inflammation, and knee osteoarthritis. Obesity both causes inflammation and puts extra weight on the knees, and inflammation results in more joint pain. On the other hand, a high-fiber diet reduces inflammation. The researchers point out that the data show "a consistent protective association" between fiber in the diet and symptoms of knee osteoarthritis (whether or not you're overweight). IN SUMMARY: Eat lots of fruits, vegetables, legumes, whole grains, and nuts! From Science Daily:

Fiber-rich diet linked to lowered risk of painful knee osteoarthritis

A fibre-rich diet is linked to a lowered risk of painful knee osteoarthritis, finds the first study of its kind, published online in the Annals of the Rheumatic Diseases. The findings, which draw on two different long term studies, are broadly in line with the other reported health benefits of a fibre-rich diet. These include reductions in blood pressure, weight, and systemic inflammation, and improved blood glucose control.

The researchers mined data from two US studies in a bid to find out if dietary fibre might have any bearing on the risks of x-ray evidence of knee osteoarthritis, symptomatic knee osteoarthritis (x-ray evidence and symptoms, such as pain and stiffness), and worsening knee pain. The first of these studies was the Osteoarthritis Initiative (OAI). This has been tracking the health of nearly 5000 US men and women with, or at risk of, osteoarthritis since 2004-6 (average age 61), to pinpoint potential risk factors for the condition.  The second was part of the Framingham Offspring cohort study, which has been tracking the health of more than 1200 adult children of the original Framingham Heart Study and their partners since 1971.

Analysis of the data showed that eating more fibre was associated with a lower risk of painful knee osteoarthritis. Compared with the lowest intake (bottom 25 per cent of participants), the highest intake (top 25 per cent) was associated with a 30 per cent lower risk in the OAI and a 61 per cent lower risk in the Framingham study. But it was not associated with x-ray evidence of knee osteoarthritis. Additionally, among the OAI participants, eating more fibre in general, and a high cereal fibre intake, were associated with a significantly lower risk of worsening knee pain.

This is an observational study, so no firm conclusions can be drawn about cause and effect. Nevertheless, the researchers say: "These data demonstrate a consistent protective association between total fibre intake and symptom-related knee [osteoarthritis] in two study populations with careful adjustment for potential confounders." [Original study.]

We spend so much on health care, but the USA really lags behind other developed countries in quality of health care. The United States is ranked number 35 in the just-released ranking of healthcare quality in 195 countries. The ranking is called the Healthcare Access and Quality Index, a highly regarded and much anticipated analysis that was just published in the journal The Lancet.

How did such health care rankings start? In the late 1970s, some researchers first talked about the idea of “unnecessary, untimely deaths”, and they proposed a list of causes from which death should not occur if the person received "timely and effective medical care". This approach has been modified and extended over time, and now there is a list of 32 medical conditions looked at in 195 countries. The researchers looked at the death rate in each country for the diseases that can be avoided or can be effectively treated with proper medical care. Some of the diseases: diabetes, hypertension, some cancers, appendicitis, etc.

Virtually all the high-ranking countries (the top 20) have universal health care, and yet they spend less on medical costs per person. Remember, when one can't afford the costs of medicines or treatments, and consequently dies - then that is effectively a "death panel" or a "death sentence". So... is medical care a right for all or a privilege for some? From Medical Xpress:

Which countries have the best healthcare?

Neither Canada nor Japan cracked the top 10, and the United States finished a dismal 35th, according to a much anticipated ranking of healthcare quality in 195 countries, released Friday. Among nations with more than a million souls, top honours for 2015 went to Switzerland, followed by Sweden and Norway, though the healthcare gold standard remains tiny Andorra, a postage stamp of a country nestled between Spain (No. 8) and France (No. 15).

Iceland (No. 2), Australia (No. 6), Finland (No. 7), the Netherlands (No. 9) and financial and banking centre Luxembourg rounded out the first 10 finishers, according to a comprehensive study published in the medical journal The Lancet. Of the 20 countries heading up the list, all but Australia and Japan (No. 11) are in western Europe, where virtually every nation boasts some form of universal health coverage. The United States—where a Republican Congress wants to peel back reforms that gave millions of people access to health insurance for the first time—ranked below Britain, which placed 30th.

The Healthcare Access and Quality Index, based on death rates for 32 diseases that can be avoided or effectively treated with proper medical care, also tracked progress in each nation compared to the benchmark year of 1990. Virtually all countries improved over that period, but many—especially in Africa and Oceania—fell further behind others in providing basic care for their citizens. With the exceptions of Afghanistan, Haiti and Yemen, the 30 countries at the bottom of the ranking were all in sub-Saharan Africa, with the Central African Republic suffering the worst standards of all.

Furthermore, he added in a statement, the standard of primary care was lower in many nations than expected given levels of wealth and development. ... Among rich nations, the worst offender in this category [underachievers] was the United States, which tops the world in per capita healthcare expenditure by some measures. Within Europe, Britain ranked well below expected levels.

The gap between actual and expected rating widened over the last quarter century in 62 of the 195 nations examined. "Overall, our results are a warning sign that heightened healthcare access and quality is not an inevitable product of increased development," Murray said.... The 32 diseases for which death rates were tracked included tuberculosis and other respiratory infections; illnesses that can be prevented with vaccines (diphtheria, whooping cough, tetanus and measles); several forms of treatable cancer and heart disease; and maternal or neonatal disorders. [Original study.]

Gout is something that is not discussed that much, but it has been increasing in recent years and now afflicts about 3.9% of adults in the US. Gout is a form of inflammatory arthritis, characterized by recurrent attacks of pain, tenderness, and swelling of a joint, frequently the joint of the big toe. It is caused by elevated levels of uric acid in the blood (known as hyperuricaemia).

Gout occurs more commonly in men ages 40 and older who eat a lot of meat and seafood, drink a lot of alcohol (especially beer) or sweetened drinks, have high blood pressure or metabolic syndrome, or are overweight. Gout used to be known as "the disease of kings" or "rich man's disease". [On the other hand, past research has shown that consumption of coffee, cherries, vitamin C foods, and dairy products, as well as losing weight and physical fitness, seems to decrease the risk.]

Recent research showed that the DASH diet reduces blood pressure and reduces uric acid in the blood, which is why a research team (study in The BMJ) now looked at whether it also lowers the risk of gout. The Dietary Approaches to Stop Hypertension or DASH diet is high in fruit, vegetables, whole grains, legumes, nuts, and low-fat dairy, and low in red and processed meats, salt, and sugary drinks. On the other hand, the typical Western diet has higher intakes of red and processed meats, sweetened beverages, sweets, desserts, French fries, and refined grains. The researchers analysed data on a total of 44,444 male health professionals who had no history of gout at the start of the study. During the 26 years of the observational study, they documented 1731 cases of gout.

The researchers found that eating a more DASH type diet - a diet rich in fruits, vegetables, legumes, nuts, whole grains, and low in salt, sugary drinks, and red and processed meats, is associated with a lower risk of gout. On the other hand, a more 'Western' diet is associated with a higher risk of gout. They found that the effects are dose dependent - the more DASH-type diet, the lower the risk of gout. Bottom line: Once again, eating lots of fruits, vegetables, nuts, legumes, and whole grains is linked to health benefits. From Science Daily:

Diet rich in fruit, vegetables and whole grains may lower risk of gout

A diet rich in fruit and vegetables, nuts and whole grains and low in salt, sugary drinks, and red and processed meats, is associated with a lower risk of gout, whereas a typical 'Western' diet is associated with a higher risk of gout, finds a study published by The BMJ.

Gout is a joint disease which causes extreme pain and swelling. It is most common in men aged 40 and older and is caused by excess uric acid in the blood (known as hyperuricaemia) which leads to uric acid crystals collecting around the joints. The Dietary Approaches to Stop Hypertension (DASH) diet reduces blood pressure and is recommended to prevent heart disease. It has also been found to lower uric acid levels in the blood. Therefore, the DASH diet may lower the risk of gout.

To investigate this further, a team of US and Canada based researchers examined the relationship between the DASH and Western dietary patterns and the risk of gout. They analysed data on over 44,000 men aged 40 to 75 years with no history of gout who completed detailed food questionnaires in 1986 that were updated every four years through to 2012.

Each participant was assigned a DASH score (reflecting high intake of fruits, vegetables, nuts and legumes, such as peas, beans and lentils, low-fat dairy products and whole grains, and low intake of salt, sweetened beverages, and red and processed meats) and a Western pattern score (reflecting higher intake of red and processed meats, French fries, refined grains, sweets and desserts). During 26 years of follow-up, a higher DASH score was associated with a lower risk for gout, while a higher Western pattern was associated with an increased risk for gout.

Credit: Both photos of gout on this page are from NHS.UK

Great article about the importance of both dirt (it's alive!) and exposure to nature. Main points: It's estimated that children now spend less time outside than the average prisoner, and that the average American adult now spends 93 percent of their life indoors (in our homes, workplaces, cars, etc.). It is now thought that human beings need to be exposed to lots of microbes when young for proper immune system development - and this means exposure to the microbes in dirt (for example, young children benefit from playing in the dirt!). There is much harm on many levels from monocultures (whether huge fields of only one crop or "perfect" lawns) sustained by large amounts of chemicals (pesticides and fertilizers). In contrast, a lawn with diversity (clover, flowering "weeds", etc.) avoids the use of dangerous chemicals, has benefits to wildlife and humans, and is also a "bee habitat".

Also, what is rarely discussed but very important to the health of our environment: An estimated quarter of a million acres are paved or repaved in the United States each year - so that "asphalt is the land's last crop". Paving over the land is "soil sealing", because it cuts off air and water and kills the microorganisms and insects that live there. This results in dirt being killed off forever. Yikes! Why isn't this discussed more? Excerpts from National Geographic:

WHY YOU NEED MORE DIRT IN YOUR LIFE

It’s estimated that children now spend less time outside than the average prisoner. This could have devastating effects: Kids need to be exposed to the microbes in the soil to build up their defences against diseases that may attack them later. But it’s not just children, Paul Bogard explains in his new book, The Ground Beneath Us. The EPA estimates that the average American adult now spends 93 percent of their life indoors. As we retreat indoors, more and more of the earth is disappearing, with an estimated quarter of a million acres paved or repaved in the United States each year.

When National Geographic caught up with Bogard by phone at his home in Minnesota, the author explained why Iowa is the most transformed state in the U.S., how soil is alive but we’re killing it, and how places where terrible things happened can become sacred ground.

You write, “We are only just now beginning to understand the vast life in the soil, what it does, and how our activities on the surface may affect it.” Talk us through some highlights of the new science—and how you became so passionate about dirt.

It began with this statistic: that those of us in the Western world now spend about 90-95 percent of our time inside, in our houses, workplaces, in our cars. We’re living our lives separated from the natural world. When we walk outside, many of us walk on pavement. There’s this literal separation from the natural ground, from the soil, the dirt. It made me think, what are the costs of this separation? And it struck me as symbolic of our separation of these many different kinds of grounds that sustain us. Our food, water, energy, even our spirits come from these different grounds.

One of the first scientific discoveries I found was the hypothesis that human beings need to be exposed to the biota in the dirt, on the ground, especially when they’re kids, as a way of inoculating us to diseases that appear later in life. Kids these days are not being exposed to dirt because they’re not allowed to play outside. Their parents think dirt is dirty. But both the newest science and the oldest traditions tell us the same thing, which is that the ground is alive. The ground gives us life. And in the book, I tried to touch on both of those things.

One expert you quote says, “asphalt is the land’s last crop.” Talk about “soil sealing” and how roads and suburbs are literally eating away at the ground beneath our feet.

Soil sealing is one of the most shocking things I learned about. When we pave over the natural ground, we cut it off from the air and water that the life in the ground needs to stay alive. We essentially kill that ground. There is an argument that, if we pulled up the pavement and worked hard to rejuvenate that ground, we could bring it back. But the scientists I talked to said, when you pave it over, it’s the last crop, the last thing that’s going to grow there. We’re not moving in the direction of pulling pavement up. We’re moving in the opposite direction where we’re paving some of our most fertile ground, the ground that we’re going to need to feed a growing population.

You also had a childhood affection for Iowa. But when you went back to research your book, you changed your mind. Why?

As a child, I was enamoured with the beauty of the green corn stalks, the black dirt, and what I thought was the natural topography. Coming back older and with a new understanding of the ground, it made me uncomfortable because Iowa is the most transformed state in the union. Some 97 percent of the natural ground has been altered, changed, or transformed. As one biologist said, “it’s an open air monoculture owned by monopolies.” So, instead of my romantic, childhood view of miles of corn stalks, the beauty of life growing, and the colour green, I saw it as this monoculture where another life isn’t allowed to grow.

Americans love their lawns and spend billions of dollars keeping them green and weed free. But we are also paying a high price for this perfect turf, aren’t we?

Oh my! We really are, certainly ecologically, paying a high price. America’s greatest crop, the thing we grow the most of, is our turf grass lawns. And the amounts of pesticides and chemical fertilisers we dump onto these lawns, and the amount of water that we use to grow them, is enormous. As a result, we have problems with runoff draining into our rivers and the lawns themselves tend to become monocultures, where nothing else grows but the turf grass. What a massive opportunity is being lost! We could have lawns that are more biologically diverse and pollinator-friendly. There’s also evidence that a number of illnesses are associated with coming into contact with these chemical fertilisers and pesticides.

This is a thought-provoking study that looked at environmental quality and cancer incidence in counties throughout the US. The researchers found that the more polluted the county, the higher the cancer incidence. An increase in cancer rates was associated with poorer air quality and with the "built environment" (such as major highways). They correctly point out that many things together can contribute to cancer occurring - and this is why looking at how polluted the air, water, and other parts of the environment are, all together, is important.

They looked at the most common causes of cancer death in both men (lung, prostate, and colorectal cancer), and women (lung, breast, and colorectal cancer). They found that prostate and breast cancer demonstrated the strongest associations with poor environmental quality. [Original study.]

The researchers point out that about half of cancers are thought to have a genetic component, which suggests that the other half have environmental causes. Other studies already find that environmental exposures (e.g., pesticides, diesel exhaust) are linked to various cancers. But this study was an attempt to look at how various things in the environment interact with rates of cancer - because we all are exposed to a number of things simultaneously wherever we live, not just to one thing. Thus this study looked at associations with rates of cancer.

Of course there is also a lifestyle contribution to many cancers that wasn't looked at here (nutrition, alcohol use, exercise). They also pointed out that many counties in the US are large and encompass both very polluted and non-polluted areas - and that those counties should be broken up into smaller geographic areas when studied. [More air pollution studies.] From Science Daily:

Poor overall environmental quality linked to elevated cancer rates

Nationwide, counties with the poorest quality across five domains -- air, water, land, the built environment and sociodemographic -- had the highest incidence of cancer, according to a new study published in the journal Cancer. Poor air quality and factors of the built environment -- such as the presence of major highways and the availability of public transit and housing -- were the most strongly associated with high cancer rates, while water quality and land pollution had no measurable effect.

Previous research has shown that genetics can be blamed for only about half of all cancers, suggesting that exposure to environmental toxins or socioeconomic factors may also play a role. "Most research has focused on single environmental factors like air pollution or toxins in water," said Jyotsna Jagai, research assistant professor of environmental and occupational health in the University of Illinois at Chicago School of Public Health and lead author of the study. "But these single factors don't paint a comprehensive picture of what a person is exposed to in their environment -- and may not be as helpful in predicting cancer risk, which is impacted by multiple factors including the air you breathe, the water you drink, the neighborhood you live in, and your exposure to myriad toxins, chemicals and pollutants."

To investigate the effects of overall environmental quality, the researchers looked at hundreds of variables, including air and water pollution, pesticide and radon levels, neighborhood safety, access to health services and healthy food, presence of heavily-trafficked highways and roads, and sociodemographic factors, such as poverty. Jagai and her colleagues used the U.S. EPA's Environmental Quality Index, a county-level measure incorporating more than 200 of these environmental variables and obtained cancer incidence rates from the National Cancer Institute's Surveillance, Epidemiology, and End Results Program State Cancer Profiles. Cancer data were available for 85 percent of the 3,142 U.S. counties.

The average age-adjusted rate for all types of cancer was 451 cases per 100,000 people. Counties with poor environmental quality had higher incidence of cancer -- on average, 39 more cases per 100,000 people -- than counties with high environmental quality. Increased rates were seen for both males and females, and prostate and breast cancer demonstrated the strongest association with poor environmental quality.

The researchers found that high levels of air pollution, poor quality in the built environment and high levels of sociodemographic risk factors were most strongly associated with increased cancer rates in men and women. The strongest associations were seen in urban areas, especially for the air and built environment domains. Breast and prostate cancer were most strongly associated with poor air quality.

The research finding of so many baby foods with elevated arsenic levels (above the legal limit) in the European Union made me wonder about arsenic standards in baby cereals in the US. It turns out that the US has "parallel" standards to the European Union. The EU has a "maximum 0.1 milligrams of arsenic per kilogram of rice" (this standard has been in place since January 2016), and in 2016 the FDA proposed a "maximum allowed standard of 100 ppb (parts per billion)" in infant rice cereal - numerically the same limit, since 0.1 mg/kg equals 100 micrograms per kilogram, which is 100 ppb.
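Here is a quick sketch of that unit conversion, just to make the equivalence explicit; nothing in it is specific to either regulation beyond the numbers quoted above.

```python
# Check that the EU limit (0.1 mg of arsenic per kg of rice) and the FDA's
# proposed limit (100 ppb) are the same number in different units.
# For a solid food, 1 ppb corresponds to 1 microgram per kilogram.

eu_limit_mg_per_kg = 0.1
micrograms_per_kg = eu_limit_mg_per_kg * 1000  # 1 mg = 1000 micrograms
ppb = micrograms_per_kg                        # 1 microgram/kg = 1 ppb

print(f"{eu_limit_mg_per_kg} mg/kg = {ppb:.0f} ppb")  # -> 0.1 mg/kg = 100 ppb
```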

Why is there so much arsenic in baby cereal? It's in the rice - rice plants absorb arsenic from the soil (it may be naturally occurring in the soil, or in the soil because of arsenic pesticides that were used for years). And why should we be concerned about arsenic in food? The health effects of regularly consuming infant rice cereal - and other rice-based products - containing traces of arsenic are currently unclear. But... the researchers stated that early-life exposure to arsenic, even at low concentrations, is of particular concern because infants and young children are especially vulnerable to the adverse health effects of arsenic. Arsenic is a carcinogen (causes cancer), and can have "neurological, cardiovascular, respiratory and metabolic" effects.

A 2016 Harvard Health Publications (Harvard Medical School) article stated: "In high doses it is lethal, but even small amounts can damage the brain, nerves, blood vessels, or skin — and increase the risk of birth defects and cancer." The FDA found, based on epidemiological evidence including dietary exposures, that inorganic arsenic exposure in infants and pregnant women can result in a child's decreased performance on certain developmental tests that measure learning.

So what should parents do? The American Academy of Pediatrics (AAP) encourages babies and toddlers to eat a variety of foods, which will decrease a child's exposure to arsenic from rice. It also encourages other options as first foods (rather than just rice cereal), such as oat, barley, and multigrain cereals - all of which have lower arsenic levels than rice cereal. From Science Daily:

New research shows illegal levels of arsenic found in baby foods

In January 2016, the EU imposed a maximum limit of inorganic arsenic on manufacturers in a bid to mitigate associated health risks. Researchers at the Institute for Global Food Security at Queen's have found that little has changed since this law was passed and that 50 per cent of baby rice food products still contain an illegal level of inorganic arsenic. Professor Meharg, lead author of the study and Professor of Plant and Soil Sciences at Queen's, said: "....Babies are particularly vulnerable to the damaging effects of arsenic that can prevent the healthy development of a baby's growth, IQ and immune system to name but a few."

Rice has, typically, ten times more inorganic arsenic than other foods and chronic exposure can cause a range of health problems including developmental problems, heart disease, diabetes and nervous system damage. As babies are rapidly growing they are at a sensitive stage of development and are known to be more susceptible to the damaging effects of arsenic, which can inhibit their development and cause long-term health problems. Babies and young children under the age of five also eat around three times more food on a body weight basis than adults, which means that, relatively, they have three times greater exposures to inorganic arsenic from the same food item.

The research findings, published in the PLOS ONE journal today, compared the level of arsenic in urine samples among infants who were breast-fed or formula-fed before and after weaning. A higher concentration of arsenic was found in formula-fed infants, particularly among those who were fed non-dairy formulas which includes rice-fortified formulas favoured for infants with dietary requirements such as wheat or dairy intolerance. The weaning process further increased infants' exposure to arsenic, with babies five times more exposed to arsenic after the weaning process, highlighting the clear link between rice-based baby products and exposure to arsenic.

In this new study, researchers at Queen's also compared baby food products containing rice before and after the law was passed and discovered that higher levels of arsenic were in fact found in the products since the new regulations were implemented. Nearly 75 per cent of the rice-based products specifically marketed for infants and young children contained more than the standard level of arsenic stipulated by the EU law. [Original study.]

A 2016 study done in New Hampshire also showed that babies eating rice cereals and other rice-based snacks had higher amounts of arsenic in their urine compared to infants who did not eat rice foods. From JAMA Pediatrics: Association of Rice and Rice-Product Consumption With Arsenic Exposure Early in Life